|
Slightly unusual request: I'm planning on upgrading my interface to something with more I/O (looking like a Presonus Firestudio Mobile) and I want to run separate mic pres into the line inputs. I know very little about mic pre-amps, but I do know about designing and building low-noise electronics, so I'm considering a clone of a high-end pre-amp. Could someone recommend their favourite pre that doesn't rely on some esoteric, archaic component? I would prefer solid-state over vacuum tube and don't want to spend hours hand-wiring transformers.
|
# ? Jul 5, 2010 16:12 |
|
|
# ? May 28, 2024 01:02 |
|
Could anyone recommend a decent sound card/audio interface that will be compatible with a Lenovo? USB 2.0 would be preferred. I really wanted to go the firewire route with PreSonus gear, but after reading into it there are so many issues with the firewire card etc. that it seems the only way I could have it working is if I purchase a macbook (I want a PC). I'm looking at this one right here, what do you guys think? M-Audio Fast Track http://www.amazon.com/M-Audio-Fast-Track-Ultra-High-speed/dp/B000Z8U0IY/ref=sr_1_4?ie=UTF8&s=musical-instruments&qid=1278348789&sr=1-4
|
# ? Jul 5, 2010 18:19 |
|
WanderingKid posted:Mics? Hell that's even worse. You aren't going to eliminate ambient noise completely, even in a super quiet room You lost me here. Have you ever recorded in a studio that had soundproofed, professionally designed rooms? Ambient noise can absolutely be eliminated. quote:and then there's noise generated by the electronics in the preamp etc all of which is way more significant than the digital noise floor. I agree that good gainstaging and elimination of common electrical problems can minimise the noise from analogue gear. But they are all designed to work best around 1.23 Volts RMS anyway and when I stick that into the line ins on my soundcard there's still *loads* off the top of the fader before I'm touching 0dB full scale. Are you talking about the fader on an analog mixer or your DAW? I don't record with faders in my signal chain, I use direct outs if there's a mixer, but usually I'm using preamps built into an interface or outboard preamps connected directly to the ADC. Either way, if you still have appreciable noise introduced by your analog gear when tracking things like drums or vocals, you've got a problem. If you don't, then what are we arguing about? Technique? Style? Who cares, I never said my way was the only way. RivensBitch fucked around with this message at 18:26 on Jul 5, 2010 |
# ? Jul 5, 2010 18:23 |
|
RivensBitch posted:Either way, if you still have appreciable noise introduced by your analog gear when tracking things like drums or vocals, you've got a problem. If you don't, then what are we arguing about? I'm just saying that the whole advantage of going digital is that noise inherent in digital systems is insignificant compared to the kind of noise you get with analogue. So the idea of running a signal into a soundcard that's so hot it's about to clip is kind of pointless and it's not going to get you any more appreciable signal relative to the noise floor. Not when the noise floor of most prosumer ADCs is still less than -100dB. Basically everything you run into an analogue input on your soundcard is going to have more noise. And it's even more pointless because you need to trim your levels anyway so you have some headroom to mix with. WanderingKid fucked around with this message at 20:42 on Jul 5, 2010 |
# ? Jul 5, 2010 20:40 |
|
WanderingKid posted:I get where you are coming from and it looks like it makes sense but it's actually insane. WanderingKid posted:I'm just saying that the whole advantage of going digital is that noise inherent in digital systems is insignificant compared to the kind of noise you get with analogue. So the idea of running a signal into a soundcard that's so hot it's about to clip is kind of pointless and it's not going to get you any more appreciable signal relative to noise floor. Not when the noise floor of most prosumer ADCs is still less than -100dB. Basically everything you run into an analogue input on your soundcard is going to have more noise. So let's say you have a usable -80dB digital noise floor. Let's say you recorded 16 tracks with an average level of -18dBfs, 62dB to the digital noisefloor on each one. Well, if the noise floor is literally the same noise on each track, it sums like identical (correlated) signals, so when we mix all these tracks together with their faders at 0, we add 6dB of noise every time the track count doubles, which adds 24dB more in the case of 16 channels. So the final mix, once it's brought up to commercial CD levels, has a noise floor at -56dB. Let's add a little reference: vinyl surface noise is about -50dB. In the above you only recorded each track with 12dB less noise than a vinyl on playback. When you summed all the tracks together and mastered the album, you approached vinyl's playback hiss in terms of noisefloor. Are CD rereleases of old LPs generally quieter than playing back the vinyl? If so, I guess those analog setups they were originally recorded on have a noticeably lower noise floor than many modern digital recording setups.
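The summing arithmetic in the post above can be checked numerically. This is a sketch under the post's own worst-case assumption that the noise floor is the *same* (fully correlated) noise on every track; the uncorrelated case is shown for contrast. The -80 dB figure and 16-track count are the post's hypotheticals, not measurements of any real interface:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tracks, n_samples = 16, 200_000

def dbfs(x):
    """RMS level in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Worst case assumed in the post: every track carries the identical
# (fully correlated) noise, so amplitudes add: +6 dB per doubling.
shared = rng.normal(scale=1e-4, size=n_samples)  # ~ -80 dBFS RMS
correlated_mix = n_tracks * shared

# More realistic: each track's noise is independent, so powers add:
# +3 dB per doubling.
independent = rng.normal(scale=1e-4, size=(n_tracks, n_samples))
uncorrelated_mix = independent.sum(axis=0)

print(round(dbfs(shared), 1))            # ≈ -80 dBFS
print(round(dbfs(correlated_mix), 1))    # ≈ -56 dBFS (+24 dB for 16 tracks)
print(round(dbfs(uncorrelated_mix), 1))  # ≈ -68 dBFS (+12 dB for 16 tracks)
```

So the post's -56 dB mix floor holds only if the converter noise really were identical across tracks; for independently recorded tracks the penalty halves in dB terms.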
|
# ? Jul 5, 2010 23:52 |
|
Pretty much everything on the issue that needs to be said already has been here. That forum is mostly full of headcases but that thread gets the point across fairly well and many of the people doing the talking are not wing nuts (for a change). For some more on noise in digital systems check out some of Dan Lavry's posts on the issue. He also posted a bunch of important stuff about AD noise floors here. Here's some more from prosoundweb with some contributions from John Hodgson (the guy who wrote the G-Force plugins). I'm not an expert on the subject since I'm just a dude who makes music. It's clear I'm not explaining the concepts very well so it's probably better if you hear it straight from the horse's mouth. WanderingKid fucked around with this message at 13:39 on Jul 6, 2010 |
# ? Jul 6, 2010 13:31 |
|
If someone could render the threads linked by WK into intelligible English, that would be great; pretty sure that stuff isn't intended for audiences at the experience level of the average ML poster. Or at least, certainly not for my level of experience.
|
# ? Jul 6, 2010 14:59 |
|
wixard posted:It doesn't take into account the 10-12ns of jitter you probably have with a prosumer interface, robbing you of another 10dB of headroom. Can you explain this? These two concepts don't seem related at all (at least not intuitively).
|
# ? Jul 6, 2010 15:00 |
|
RivensBitch et al. aren't doing anything that is inherently wrong. There's nothing wrong with recording as hot as they can and trimming the level before it hits a soundcard input or desk. The guy in the first link mentioned how he works Pro Tools sessions all the time where all the channels are recorded hot as hell and when he wants to spread them out on an SSL, he has to trim all the inputs because all his VU meters are pegged. You can do that if you have to. But you aren't lowering SNR by any degree that matters by simply recording -12 to -20dB below full scale to begin with, and you don't have to trim later. And all your hardware sounds better because it was designed to operate around 0 VU (volume units) anyway. Above 0 VU you have quite a lot of headroom before the signal turns to poo poo. In digital systems, the signal turns to poo poo precisely at 0dB full scale. So what most people are doing nowadays is calibrating their meters and gain structuring so that they have some headroom in their DAW before they clip. It doesn't matter whether you do it pre recording or post recording. It only matters that you do it, otherwise how the hell are you gonna mix with no headroom? That's not the point of contention however and I'm pretty sure we all agree on these points. The point of contention arises from this idea that recording super hot gives you higher signal to noise ratio, and that's not true because the noise introduced from the digital part of the system - the converters - is insignificant next to noise generated from the electronics in whatever box you are recording. If you are getting audible 60Hz hum and you record quiet and then later on try to squeeze +20dB out of it in your DAW then yeah, of course you are going to get +20dB more 60Hz hum. That's not a case for recording stupidly hot to begin with. That's a case for trying to kill a ground loop which is causing an inordinate amount of 60Hz hum. 
If you pump volume like a motherfucker then you will notice more noisy byproducts of your gear. Hell, if you pump volume like a motherfucker it is possible to hear quantisation noise in a 16 bit recording under very specific conditions, and that level of noise is almost insignificant to begin with. Follow the iZotope dither guide if you want to check it out yourself but it quickly becomes apparent that nobody actually listens to music like that and you won't be able to resolve quantisation noise from more significant sources of broadband noise or overwhelming signal. WanderingKid fucked around with this message at 16:21 on Jul 6, 2010 |
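The core claim in the two posts above — that backing off 12-20 dB from full scale costs essentially no signal-to-noise when the analog chain, not the converter, sets the noise floor — can be sketched numerically. The -85 dB preamp floor and -110 dB converter floor below are illustrative assumptions, not specs for any particular piece of gear:

```python
import math

def power_sum_db(*levels_db):
    """Sum uncorrelated noise sources given in dB (powers add)."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Hypothetical figures, all in dB relative to digital full scale:
PREAMP_NOISE = -85.0  # analog noise floor when the signal peaks at 0 dBFS
ADC_NOISE = -110.0    # converter noise floor (fixed, independent of level)

# Record hot: peaks at 0 dBFS.
snr_hot = 0.0 - power_sum_db(PREAMP_NOISE, ADC_NOISE)

# Record 19 dB cooler: the signal and the preamp noise both drop 19 dB,
# but the converter floor stays put, so its relative contribution grows.
snr_quiet = -19.0 - power_sum_db(PREAMP_NOISE - 19, ADC_NOISE)

print(round(snr_hot, 1), round(snr_quiet, 1))  # ≈ 85.0 vs 84.0 dB
```

Under these assumed figures, recording 19 dB cooler costs about 1 dB of SNR, which is the quantitative version of "you aren't lowering SNR by any degree that matters".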
# ? Jul 6, 2010 16:09 |
|
This argument is getting pretty silly. For what it's worth, even if the tracks aren't recorded that hot I always bring all my faders down to -9 to -12dB when setting up a mix because headroom rules and there's nothing more painful than watching a green engineer push his faders to their limit and have no more to give. I once attended an Ableton Live demo where the presenter was bitching and moaning about how Ableton's mix bus distorts and can't handle this and that and the entire time I'm watching him mix with all of his faders at 0 and all of his channel meters tickling red, and his master bus absolutely red red red. I just shook my head and went back to the bar. Kids these days.
|
# ? Jul 6, 2010 16:42 |
|
I used to do that too. When faced with DAW meters, though, your natural inclination is to go with green = go, yellow = brace yourself and red = stop. So noobs naturally find themselves mixing to the red line and squashing the poo poo out of everything with compressors because they got no headroom. You gotta create it somewhere I suppose. Shame it took years before us bedroom warriors got the message from some guy who works an SSL all day long. You mean, you have ~20dB headroom over the STOP sign on your meters? Where you been all my life baby?
|
# ? Jul 6, 2010 17:22 |
|
The dBFS meter is flawed when it comes to recording. Like WanderingKid said, you have to create your own headroom, but who in their right mind knows this when they're starting out? I certainly didn't. I started with a 4 track cassette recorder that sounded really cool when you pushed it hard, so when I got a computer that was capable of recording I just naturally thought that 0 was where I was supposed to go. The creators of DAW software just assume that everyone knows that you should create this headroom on your own I guess. Even though I'm sure 75% of their client base are bedroom artists that don't know anything about gain staging.
|
# ? Jul 6, 2010 18:24 |
|
I'm going to actually find some time to record some test tones at different levels and then normalize them and flip 180 degrees to really get down to what kind of noise might be introduced at the ADC. If WanderingKid and Hog are right, there shouldn't be any appreciable noise difference when recording at 0dB vs -20dB.
|
# ? Jul 6, 2010 18:57 |
|
WanderingKid posted:Shame it tooks years before us bedroom warriors got the message from some guy who works an SSL all day long. http://www.gearslutz.com/board/so-much-gear-so-little-time/463010-reason-most-itb-mixes-don-t-sound-good-analog-mixes-restored.html vvv edit: well gently caress Laserjet 4P fucked around with this message at 22:46 on Jul 6, 2010 |
# ? Jul 6, 2010 20:31 |
|
RivensBitch posted:I'm going to actually find some time to record some test tones at different levels and then normalize them and flip 180 degrees to really get down to what kind of noise might be introduced at the ADC. If WanderingKid and Hog are right, there shouldn't be any appreciable noise difference when recording at 0dB vs -20dB. Most of that has already been worked out for you. TC publish the noise figures for the ADC and DAC stage if you want to go have a look. Emu claims their 0404 has 111dB SNR on the ADC side. That's pretty insane. It's just impossible for any analog signal generator or signal processor to produce less noise, so it's unlikely that the converter stage is going to be the weak link in the chain so to speak, unless you digitally create all your sounds and mix it all in a computer. Your preamps will put out a lot more noise than your ADC. Most soundcards these days have pretty ridiculous noise figures. Yoozer posted:http://www.gearslutz.com/board/so-m...s-restored.html Beat you. :O
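For reference, an SNR spec like the quoted 111 dB can be translated into "effective bits" with the textbook relation for an ideal quantizer driven by a full-scale sine, SNR ≈ 6.02·N + 1.76 dB. A small sketch:

```python
def enob(snr_db):
    """Effective number of bits: inverse of SNR = 6.02*N + 1.76 dB."""
    return (snr_db - 1.76) / 6.02

def ideal_snr(bits):
    """Theoretical best-case SNR of an ideal N-bit quantizer (dB)."""
    return 6.02 * bits + 1.76

print(round(enob(111.0), 1))    # ≈ 18.1 effective bits for 111 dB SNR
print(round(ideal_snr(24), 1))  # ≈ 146.2 dB: what 24 bits could do in theory
```

So even a converter with an excellent 111 dB spec delivers roughly 18 of its 24 bits; the rest is analog noise, which is another way of seeing why "use all 24 bits" isn't achievable in the first place.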
|
# ? Jul 6, 2010 22:20 |
|
Schlieren posted:If someone could render the threads linked by WK into intelligible English, that would be great; pretty sure that stuff isn't intended for audiences of the experience level of the average ML poster. Or, certainly not for my level of experience
|
# ? Jul 7, 2010 01:45 |
|
WanderingKid posted:Pretty much everything on the issue that needs to be said already has been here. That forum is mostly full of headcases but that thread gets the point across fairly well and many of the people doing the talking are not wing nuts (for a change). If someone posts some nonsense bullshit they usually get called out pretty quickly too!
|
# ? Jul 7, 2010 01:47 |
|
WanderingKid posted:Most of that has already been worked out for you. Well, where's the fun in reading someone else's conclusions, folding your arms and convincing yourself you're right without even trying it out for yourself? If you're right, then a simple experiment should yield a null waveform. Record a test tone whose peaks are at 0dB in your DAW. Record the same test tone where the peaks are at -20dB in the DAW. Then normalize the second waveform to 0dB. By your argument, they should null (sum to silence) when one waveform is inverted. If not, then we can conclude that ONE of the levels is producing more noise than the other. If you feel up to it, why don't you try this experiment with me and we can compare our results? I'll try to post my waveforms/results later tonight if time permits. After all, if we're just going to reference gearslutz and call the argument settled, then what's the point of this thread? I'd much rather we engage in some group experimentation and try to come to some conclusions of our own, wouldn't you? RivensBitch fucked around with this message at 04:11 on Jul 7, 2010 |
# ? Jul 7, 2010 04:09 |
|
I'd love to see the results!
|
# ? Jul 7, 2010 05:08 |
|
I generated a sine wave test tone on my desktop PC, sampled it internally within Ableton (to ensure consistency) and looped it so that it was alternating between a short section of sine wave and a short section of silence. I then took the output of my M-Audio interface on the desktop and connected it to a preamp input on my TC Konnekt 48 connected to my Macbook Pro. My first pass was recorded at about -1.9dB according to the spectrum analyzer in Live. I verified that the waveform was not distorted. My second pass was about -17.5dB below that figure. I then lined up the two waveforms on their own tracks such that the sharp transients were exactly aligned, and added Utility plugins to both channels. I engaged the DC offset correction, and inverted the waveform on the second channel. I then boosted the second, quieter waveform until I achieved the maximum phase cancellation, and sampled the result. Here are the RAW wave files: http://eric.frap.net/sa/ml/raw%20waves.zip Here is the Ableton Session (latest version of 8 with a Max for Live patch to fine tune summing levels down to .00001 of a dB): http://eric.frap.net/sa/ml/ML%20gain%20experiment.zip I could not get the waveforms to cancel no matter what I tried. I had to create a Max for Live patch to allow me to adjust the gains down to a ten thousandth of a dB, and still I couldn't get them to phase cancel. There was still sine wave present at about -40dB. So who wants to take a guess as to what is going on here? Perhaps someone else could repeat the experiment to verify I'm not doing something wrong? Is it possible that noise and jitter introduced by recording at -20dB and boosting it to 0dB is what's preventing full phase cancellation? And before anyone asks, the DC offset correction increased the phase cancellation by an extra 10dB so I wasn't cheating there. RivensBitch fucked around with this message at 09:45 on Jul 7, 2010 |
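A purely digital version of this null test can be simulated, which helps separate what *should* survive the subtraction (the noise floor, dominated by the boosted quiet pass) from artifacts of an analog loopback (sample-clock mismatch, converter filters, sub-sample misalignment) that keep real recordings from nulling deeply no matter the level. All levels, noise figures, and bit depths below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f, n = 44100, 1000.0, 44100
tone = np.sin(2 * np.pi * f * np.arange(n) / fs)

def record(signal, level_db, analog_noise_db=-100.0, bits=24):
    """Toy 'recording': scale the signal, add analog-style noise,
    then quantize to `bits` bits (all figures are assumptions)."""
    x = signal * 10 ** (level_db / 20)
    x = x + rng.normal(scale=10 ** (analog_noise_db / 20), size=x.shape)
    q = 2.0 ** (bits - 1)
    return np.round(x * q) / q

hot = record(tone, -1.0)
quiet = record(tone, -18.0) * 10 ** (17.0 / 20)  # normalize back up to -1 dB

residual = hot - quiet  # the tone itself cancels exactly; noise remains
residual_db = 20 * np.log10(np.sqrt(np.mean(residual ** 2)))
print(round(residual_db, 1))  # ≈ -83 dB: the boosted pass's noise dominates
```

In this idealized version the residual is pure noise around the boosted noise floor, with no sine component at all; a tone left over at -40 dB, as reported, points at alignment or level-matching problems rather than converter noise.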
# ? Jul 7, 2010 09:42 |
|
This conversation is completely blowing my mind
|
# ? Jul 7, 2010 10:33 |
|
Isn't this just testing how noisy and non-linear preamps are at different amounts of gain? I mean, I'm just a dude with a couple of synthesizers and a guitar. I don't even have instruments sensitive enough to measure the noise of converter and preamp stages, nor do I have the electronics know-how to do it properly. The only spectrum analyser I have is SPAN and that only goes down to -72dB. I just know what engineers teach me for free in their own time and one of the fundamental basics is that digital systems are extremely intolerant of noise and interference. Much more so than analog systems. I mean, if you can explain how this is supposed to work and what it's attempting to prove I'll help any which way I can but yeah. I'm just a guy with a guitar. That's why I just linked to posts by Jon Hodgson and Dan Lavry, because evidently I'm not paraphrasing them very well.
|
# ? Jul 7, 2010 11:04 |
|
I'm gonna try to sum up my position in a way that makes sense from a musician's perspective. When you work on a nice desk with a bunch of VU meters there's a red line at 0 VU, and if you tick that it's not the end of the world because depending on the desk, you got maybe up to +25dB above that before your signal turns to dogshit. In your DAW you tick the red line and you are fine, but if you go over it you will clip, which is the point at which your signal instantly turns to dogshit. I understand why DAWs have full scale peaking meters instead of the old style VU averaging meters, because in digital audio you can't use an averaging meter where there is no overload margin. It's either overloading (clipping) or it's not. If you use averaging meters to mix, your meters won't tell you when you clip. So on a nice analog console you got a good deal of headroom (maybe about 20dB) above 0 VU to mix with. In a DAW mixer you can have tonnes of headroom (potentially a lot more than 20dB) but it's under 0dB full scale. I guess what people are doing now with digital mixing is to make their DAW mixer a bit like a kick rear end desk. You take 20dB off the top of your peaking meters and call -20dB full scale the equivalent of 0 VU. There's obvious differences between averaging and peaking so it's not the same and it won't get the same results, but you are worried more about clips in digital audio so it makes sense. The second bit is about gain structure. So if you distort the ever living gently caress out of your guitar and then record it into your soundcard, it's going to sound distorted no matter how much you pull down the channel gain. That's obvious, right? If you get your mic and drive your preamp super hard so it's hissing all over the place, that's gonna show in the recording. So if you want a super hot, distorted sound that's fine and you know what to do. You drive your amp into its overload margin (saturation) or beyond it and you record it that way. 
When you mix it in Cubase or whatever you can trim your channel inputs so you have your 20dB of headroom and you can mix it easy. That's fine. But why would you record everything as hot as it can possibly go? The reason most people use is that you get more signal to noise ratio, or that you need to use all your 24 'bits', which is an idea that I don't buy, based on the information I'm getting from Paul Frindle and others. WanderingKid fucked around with this message at 12:03 on Jul 7, 2010 |
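The "call -20 dBFS the equivalent of 0 VU" convention described above amounts to nothing more than a fixed offset when you read your DAW meters. A trivial sketch (the -20 dBFS reference is one common calibration; -18 dBFS is another frequent choice):

```python
def dbfs_to_vu(level_dbfs, reference_dbfs=-20.0):
    """Read a dBFS level against a chosen 0 VU reference point.
    -20 dBFS is one common calibration; -18 dBFS is another."""
    return level_dbfs - reference_dbfs

print(dbfs_to_vu(-20.0))  # 0.0  -> sitting right at nominal level
print(dbfs_to_vu(-1.0))   # 19.0 -> nearly all the headroom already spent
```

(VU meters also average over roughly 300 ms while DAW meters read peaks, so as the post says, the mapping is about headroom bookkeeping, not identical ballistics.)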
# ? Jul 7, 2010 11:45 |
|
WanderingKid: For someone who has posted a lot on this subject, I'm really disappointed that you're now going to claim ignorance and just hide behind links to other people's opinions on the subject. I find it really surprising that you can spend so much time posting about this but couldn't find time to try the experiment yourself and see what results you could come up with. I'm wondering if you even took the time to load the wav files from my experiment into your DAW... Did you notice anything interesting about the silent sections between the sine waveforms? As I said earlier in the thread, where's the fun in reading someone else's conclusions, folding your arms and convincing yourself you're right without even trying it out for yourself? It seems you're content with that, and that's fine I guess, but it seems kind of disingenuous. So does the "from a musician's point of view" argument you bring up; we're all musicians here, so I don't really get your point unless you're trying to imply that this subject is some kind of rocket science that's beyond our ability to understand, so we might as well just read your links and accept what's said as gospel.
|
# ? Jul 7, 2010 16:22 |
|
I'm not asking you to accept anything as gospel. Read it or don't. Take what you can away from it. Or don't. I've already said I'd listen to the wavs when I get home because I don't have speakers on my work computer. Reserving judgement on the test until I can hear it.
WanderingKid fucked around with this message at 16:56 on Jul 7, 2010 |
# ? Jul 7, 2010 16:29 |
|
I'm no expert engineer but this seems like an interesting experiment to do; I'd like to see how it changes on different interfaces. From RivensBitch's explanation I get the basics of what I'm supposed to do, but it would be nice if you could post a slightly more detailed set of instructions to duplicate this experiment. For instance, I don't know what DC Offset Correction even is! I have Cubase 4 and Ableton Live 8, so if anyone wants to assist a little I'll try this too.
|
# ? Jul 8, 2010 03:14 |
|
mike- posted:I'm no expert engineer but this seems like an interesting experiment to do, I'd like to see how it changes on different interfaces. Download my ableton set and the plugins will be loaded already, it will also have the waveforms loaded so you can see what I did.
|
# ? Jul 8, 2010 03:47 |
|
the wizards beard posted:Can you explain this? These two concepts don't seem related at all (at least not intuitively). It's a recreation of a 20kHz tone from a DAC with no jitter suppression on its wordclock input. The blue one is using its internal clock, which is apparently very stable, and the pink one is it chasing a wordclock signal with 25ns peak jitter applied to it. The noisefloor is raised about 50dB. I admit I guessed at how the noise floor would increase with 10-12ns of jitter, but jitter = noise. As far as I know, no one is a real expert on jitter; it is too hard to measure consistently. People like Bob Katz have spent a lot of time and effort trying to hear it and track it down, and with their own critical listening tests they conclude things that are often counterintuitive to the way most of us think about digital audio. For instance, he asserts here (most of the way down under "Can Compact Discs contain jitter?") that he can hear the difference between a standalone professional CD record deck taking AES inputs and a SCSI-based CD recorder writing data from a hard drive. He also asserts that if you have a jittery CD or DAT copy of your music you can write it to hard disk, burn it with that SCSI CDR drive and fix it, creating a dub that sounds better than the original. Julian Dunn is another guy to look up on the nuts and bolts of jitter if you're interested; I think he has a paper that attempts to quantify what amount of jitter is audible that I see referenced a lot but have never read. I only know that in my practical experience with using digital snakes and sending their output through thousands of watts of gain in PA systems, the noise floor between well-clocked and poorly clocked digital rigs is very noticeable, even on setups that cost 5 figures for 12 or 16 channels, and unfortunately figuring out what the best clocking scheme is to reduce it isn't always as easy as buying a $2000 master clock and feeding wordclock everywhere. 
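The numbers wixard is gesturing at can be estimated with the usual back-of-envelope formula for jitter-limited SNR on a full-scale sine, SNR = -20·log10(2π·f·tj). Two hedges: the formula assumes random jitter expressed as an RMS value, while the 25 ns figure quoted above was *peak* jitter, and real clock jitter is rarely white, so treat these as rough orders of magnitude:

```python
import math

def jitter_snr_db(f_hz, jitter_rms_s):
    """Back-of-envelope SNR limit from random sampling jitter on a
    full-scale sine: SNR = -20*log10(2*pi*f*tj)."""
    return -20 * math.log10(2 * math.pi * f_hz * jitter_rms_s)

print(round(jitter_snr_db(20_000, 25e-9), 1))  # ≈ 50.1 dB at 20 kHz, 25 ns
print(round(jitter_snr_db(20_000, 10e-9), 1))  # ≈ 58.0 dB at 20 kHz, 10 ns
print(round(jitter_snr_db(1_000, 10e-9), 1))   # ≈ 84.0 dB at 1 kHz, 10 ns
```

The first figure lines up loosely with the "noisefloor raised about 50dB" reading of the 25 ns plot; note also that jitter damage scales with signal frequency, so a 20 kHz test tone is close to the worst case.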
WanderingKid posted:Pretty much everything on the issue that needs to be said already has been here. That forum is mostly full of headcases but that thread gets the point across fairly well and many of the people doing the talking are not wing nuts (for a change). The problem with your argument is if you follow your logic to its conclusion, you're saying that when you record with an SSL console, which your linked thread stated has a noisefloor 75dB from clipping, you will hear no degradation tracking to your final mix on your Emu 0404 until you get to -36dBFS, 75dB above the published noisefloor of the interface. I can't agree that is true in practice. Hogscraper posted:Something I didn't mention in my blog post that I'll mention here is most of these plugins that emulate old gear like an LA2A, 1176, Pultec, etc are all calibrated to work better with a lower peak level. Check the manuals that come with each plugin to find out what their ideal level is. I think the Waves and UAD emulations are -18 dBFS. You've said multiple times dBFS is flawed but you have yet to provide a scenario where maximizing dBFS doesn't also maximize your RMS and signal to noise. You asserted several pages ago and again on this page that ADCs have a nominal input level below 0dBFS above which they react worse to signal. I haven't seen you actually support any of your assertions logically; you just quote manuals out of context. 0 dBu means literally nothing if you are mixing entirely in the box and printing a digital master. Manufacturers publish where 0dBu hits their ADCs in dBFS so that you can calibrate them to your existing analog gear, not because you should. All you are doing is telling people how to calibrate meters so they can record with the same amount of headroom they used to with analog gear, and I'm asserting that headroom is better spent driving the ADC in very many digital setups. 
Find a post by Bob Katz or Dan Lavry where they actually say their ADC sounds worse when you hit it harder, not where they say if you want to mix ITB with the same gain structure as an analog desk, you need to record around -18dBFS. You realize the logical conclusion of your assertion is that by mastering a CD at the levels you usually do, you are running the DAC on everyone's playback devices way past its point of peak performance and making them sound worse, right? RivensBitch - My analysis rig is tied up on the back of a truck until the weekend, bouncing from gig to gig. I will run this experiment this weekend (Sunday most likely, if another gig doesn't pop up) and I'll also happily see what FFT analysis in Smaart 7 can tell us exactly about the differences in your results and mine (and anybody else's).
|
# ? Jul 8, 2010 16:07 |
|
wixard posted:You realize the logical conclusion of your assertion is that by mastering a CD at the levels you usually do, you are running the DAC on everyone's playback devices way past its point of peak performance and making them sound worse, right? I just assume if some designer of an ADC/DAC system is kind enough to put an optimal 0 dBu point in the manual, I follow it. I usually am trimming the gain up or down before mixing after that point anyway, so tracking low to save an extra step isn't my argument for it. To understand my view is to understand saturation plugins. The URS plug I have is extremely subtle and you can only just barely hear it working, but if you put it across 32 tracks and bypass them all simultaneously it's mind blowing. Little oddities definitely do add up. Some are sonically pleasant and others detrimental to the sound you're trying to achieve. I try to hit the 0 dBu point because that's what the engineer who designed the piece of gear I'm using says is the cleanest/clearest point. More than that results in some distortion no matter how minutely audible. Less than that results in extraneous noise. It's kind of a dumb argument when I can't hear the distortion on a $4,000 playback system. That, or I'm hearing it all the time so I can't a/b the source and hear it anyway. I know. It's theoretical cosmic stuff I'm apparently worrying about. Hogscraper fucked around with this message at 18:15 on Jul 8, 2010 |
# ? Jul 8, 2010 17:45 |
|
wixard posted:Here's an image from the tech paper RivensBitch posted describing TC's new jitter reduction technology. This is literally the most extreme example they could pick, and that image is misleading: the text mentions symmetric noise skirts around the peak, but they've cropped things to make it look like white noise. "Noise floor" to me suggests the background noise across the entire spectrum, not a specific band. I'll take your word that it is a significant noise source, but that paper reads more like advertising than a peer-reviewed scientific article.
|
# ? Jul 8, 2010 18:39 |
|
Sorry to interrupt, but I recently got a Desktop Konnekt 6. I've got my latency up all the way and I'm still getting skipping in my audio. The ASIO buffer is at 8192 samples, latency at 186.2 ms. I'm baffled because this didn't happen with my Fast Track Pro with lower latency settings. I've got everything up to date as far as software and firmware for the thing. This is just on basic playback on iTunes even.
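As a sanity check on the numbers in the post above, the reported 186.2 ms is about what a single 8192-sample ASIO buffer works out to at a 44.1 kHz sample rate, so if audio still skips at a buffer that enormous, the problem is almost certainly the firewire bus, chipset, or driver rather than the buffer setting itself:

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency contributed by a single audio buffer."""
    return 1000.0 * buffer_samples / sample_rate_hz

print(round(buffer_latency_ms(8192, 44100), 1))  # ≈ 185.8 ms, near the 186.2 reported
print(round(buffer_latency_ms(256, 44100), 1))   # ≈ 5.8 ms, a typical tracking setting
```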
|
# ? Jul 8, 2010 21:27 |
|
Could be your firewire chipset causing an issue. Have you tried turning on safe mode in the TC control panel? Start with Safe Mode 3 and turn your latency down and see what you come up with.
|
# ? Jul 8, 2010 21:44 |
|
I'm in Safe Mode 3 already, I had to switch to Legacy drivers because I kept getting BSODs -- but even before that switch I still had this.
|
# ? Jul 8, 2010 21:58 |
|
No. 9 posted:I'm in Safe Mode 3 already, I had to switch to Legacy drivers because I kept getting BSODs -- but even before that switch I still had this. I can run my 24D with about a 6 ms roundtrip latency. This is on a Mac, but I don't see why you couldn't hit 20-30 ms on Windows without any problems.
|
# ? Jul 9, 2010 02:42 |
|
So, I feel dumb asking but I can't find the beta drivers. The sticky on the top of the forum just links the release page for 2.4.1 which I have.
No. 9 fucked around with this message at 02:57 on Jul 9, 2010 |
# ? Jul 9, 2010 02:52 |
|
No. 9 posted:So, I feel dumb asking but I can't find the beta drivers. The sticky on the top of the forum just links the release page for 2.4.1 which I have.
|
# ? Jul 9, 2010 03:51 |
|
Hogscraper posted:I try to hit the 0 dBu point because that's what the engineer who designed the piece of gear I'm using says is the cleanest/clearest point. More than that results in some distortion no matter how minutely audible. Less than that results in extraneous noise. the wizards beard posted:This is literally the most extreme example they could pick and that image is misleading, the text mentions symmetric noise skirts around the peak but they've cropped things to make it look like white noise. "Noise floor" to me suggests the background noise across the entire spectrum, not a specific band.
|
# ? Jul 9, 2010 16:06 |
|
I had to go grab the Wavelab 5 demo and since it was a quiet day at work, I toyed around with it at the office. No speakers again, but all I'm seeing is a noise floor that scales with peak signal. You have about 12 bits of useful signal and the rest is noise on both mono wavs. The summed wav is stereo for some reason, which I don't know what to make of yet. I suspect the summed file is botched because the residual impulse gets higher in amplitude the further into the recording you go, which may indicate that you didn't line the two mono wavs up to the exact sample? I still don't understand what this test is meant to prove. That if you add +17.5dB of gain you get (surprise) +17.5dB of gain?
|
# ? Jul 9, 2010 17:08 |
|
The waveforms were synced down to the sample. The summed wav is stereo because it's taken from the master bus. What does this experiment prove? Well, it certainly DOESN'T prove that there is no difference in signal when recording hot vs -20dB from hot. How would you construct it differently to prove that there is no advantage to recording as hot as possible?
|
# ? Jul 9, 2010 19:03 |
|
|
|
But there is no advantage in recording as hot as possible, so that's a loaded question. The idea of recording super hot comes from the 16 bit days and is a defunct habit that some people still do even though there is no logical basis for it. Explanations for this are given by Paul Frindle and Skip Burrows here and here. I've quoted the most relevant passages below. (Skip's spelling is rather atrocious and is copy pasted as is.) Skip Burrows posted:
Paul Frindle posted:Ok one final word about converters... There's a little bit more with Bob Katz chiming in here. I admit reading through the entire thread is a bit much, so I'll just highlight the most relevant posts addressing this issue. But back to your example. I can't get them to null, but they are very, very slightly different lengths. The first impulse is 1 sample longer than the other and I have no idea why. Getting these files to null is completely tangential to what we are talking about in either case. WanderingKid fucked around with this message at 20:29 on Jul 9, 2010 |
# ? Jul 9, 2010 20:25 |