kyojin
Jun 15, 2005

I MASHED THE KEYS AND LOOK WHAT I MADE

Rexz posted:

The idea that in a few years we'll be running our file servers off spare Sandy Bridge stuff is fantastical - and completely true!

I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SAbnzbd, fileserver, mythTV (hence not the K - I want VT-d) etc. Is this realistic or is performance going to be too poor with that many machines? 8GB RAM and I'll be using an SSD for the OS to run from so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too.

I'm in no hurry; since I will need all those SATA ports, I need to wait until that is all fixed. Is Bulldozer likely to be a better option? Not surprisingly, running a VM host is not part of the benchmarking for most sites, so I am largely guessing based on multithreading performance.

Any pointers?


movax
Aug 30, 2008

FunkyUnderpants posted:

Quick question: does anyone know if the Z68 chipset is going to be able to support dual PCI Express x16 slots instead of crippling them both to x8? I recall that nVidia chipsets a few revisions back could do this on their premium boards, but I've also heard that some lawsuit between nVidia and Intel prevents nVidia from doing the same for us this time around.

Sucks. I really wanted good SLI performance from my two GTX 460 cards.

If not the Z68, does anyone know if there's ever going to be any chipset that can support that? I was under the impression, though, that Intel and nVidia were the only actual players in the not-just-southbridges industry.

Z68 will just allow you to have the FDI* link between PCH and CPU (thus enabling integrated graphics) whilst also allowing you to overclock like a madman.

There are 24 PCI-Express 2.0 lanes available on a Sandy Bridge platform. 16 from the processor, 8 from the chipset. Processor can either offer 1 x16 or 2 x8. Chipset can offer 8 lanes, grouped "logically" in ports 0-3 and 4-7. Each of those groupings can become a x4 link. Most of those lanes disappear to onboard peripherals such as PCIe<->PCI bridges, USB 3.0, additional SATA controllers, etc.

I don't foresee the CPU gaining any more lanes until a full redesign; it's more feasible to get a new PCH that has a larger PCIe Root Complex. Even then, I'd argue a new PCH with more lanes is unlikely to help for graphics, because it would just be another grouping of x4 links, so you still wouldn't gain an additional x8 or x16 link.

PCIe packet switches (aka bridges) can throw more lanes out there, but you're still constrained by the bandwidth of the upstream port.
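
To put rough numbers on that lane budget (the peripheral counts below are illustrative assumptions, not taken from any particular board's manual):

code:

# Rough lane-budget tally for the Sandy Bridge topology described above.
# The PCH consumers listed are typical examples, not any specific board.
CPU_LANES = 16   # one x16 link or two x8 links from the processor
PCH_LANES = 8    # from the chipset, grouped as ports 0-3 and 4-7

pch_consumers = {                      # assumed, for illustration only
    "PCIe-to-PCI bridge": 1,
    "USB 3.0 controller": 1,
    "extra SATA 6Gbps controller": 1,
    "x1 expansion slots": 2,
}

used = sum(pch_consumers.values())
print(f"CPU lanes for graphics: {CPU_LANES} (one x16 or two x8)")
print(f"PCH lanes left for slots: {PCH_LANES - used} of {PCH_LANES}")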

quote:

I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SAbnzbd, fileserver, mythTV (hence not the K - I want VT-d) etc. Is this realistic or is performance going to be too poor with that many machines? 8GB RAM and I'll be using an SSD for the OS to run from so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too.

I'm in no hurry; since I will need all those SATA ports, I need to wait until that is all fixed. Is Bulldozer likely to be a better option? Not surprisingly, running a VM host is not part of the benchmarking for most sites, so I am largely guessing based on multithreading performance.

Any pointers?

6-8 unique VMs, or just 6-8 services on a single (or two) VMs? You will care most about # of threads and amount of RAM. MythTV sounds like the most CPU-hungry task you've listed (I assume your SQL server would be MSSQL or MySQL in a development role, not a production role). A 2600 (4 cores w/ HT) plus 16GB of RAM on the ASRock P67 board would be pretty cheap and powerful. Could try and find a nice H67 board as well (or Z68) so you don't have to waste watts driving a dedicated video card. (Or just buy an ATI Rage XL PCI card from your local shop for $5).
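
As a back-of-the-envelope check on the RAM side (per-VM figures below are just guesses for illustration, not recommendations):

code:

# Back-of-the-envelope RAM budget for a 6-8 VM home server.
# Every per-VM figure here is an assumption for illustration only.
host_ram_gb = 16
host_overhead_gb = 2.0            # hypervisor + host OS, assumed

vm_ram_gb = {
    "domain controller": 1.0,
    "SQL (light dev use)": 2.0,
    "SABnzbd": 0.5,
    "fileserver": 1.0,
    "MythTV backend": 2.0,
    "spare/testing": 1.5,
}

allocated = sum(vm_ram_gb.values())
print(f"Allocated to guests: {allocated:.1f} GB")
print(f"Headroom left: {host_ram_gb - host_overhead_gb - allocated:.1f} GB")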

I've blown my wad on drives recently, and find my E6600 faltering under heavy load doing fileserving, so I don't have money to upgrade, but that's what I'd do if I could.

*FDI - Flexible Display Interface. IIRC, it is a PCI-Express style link (electrically and physically anyways, an AC-coupled differential transmission line) that pipes GPU data from the CPU to the PCH, where the PCH can output a variety of signals/combinations. Last generation Ibex Peak (5-Series) could do 2 outputs at most, but could pick from VGA, HDMI, DisplayPort and LVDS.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

brainwrinkle posted:

What is the general consensus on load line calibration when overclocking Sandy Bridge? I've heard in a few places that it causes very short voltage spikes on the higher settings and the highest setting can increase the voltage under load. I've got my 2500k stable at 1.35V and medium (Level 3 in the ASRock BIOS) load line calibration. Would it be worth increasing the LLC to drop the Vcore a bit? Do any of the other voltages really matter for a ~4.5 Ghz overclock?
Never use loadline calibration; it will cause voltage overshoots when exiting load conditions that can damage the processor (or at the very least cause hangs or crashes). If the voltage isn't high enough under load but is within safe limits when idle, just increase the voltage setting.

FunkyUnderpants posted:

Quick question: does anyone know if the Z68 chipset is going to be able to support dual PCI Express x16 slots instead of crippling them both to x8? I recall that nVidia chipsets a few revisions back could do this on their premium boards, but I've also heard that some lawsuit between nVidia and Intel prevents nVidia from doing the same for us this time around.

Sucks. I really wanted good SLI performance from my two GTX 460 cards.

If not the Z68, does anyone know if there's ever going to be any chipset that can support that? I was under the impression, though, that Intel and nVidia were the only actual players in the not-just-southbridges industry.
This was largely answered, but the higher-end Sandy Bridge boards will include nVidia NF200 bridge chips, which take 16 lanes from the CPU and provide 32 lanes out to devices. Your cards are still limited to a combined total of 16 lanes of bandwidth, but the NF200 has various tricks to improve the effective bandwidth beyond what you get with x8/x8. However, as mentioned, there's minimal performance difference in the real world because 8 PCI-E 2.0 lanes are enough for anybody.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

kyojin posted:

I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SAbnzbd, fileserver, mythTV (hence not the K - I want VT-d) etc. Is this realistic or is performance going to be too poor with that many machines? 8GB RAM and I'll be using an SSD for the OS to run from so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too.

We have an older dual-quad at work with 12GB of memory (~7GB used); it runs about 10 Linux and Windows server VMs of various duties, and I'm probably only using ~1500MHz of CPU most of the time. I could probably run 30 more if I had another 12GB of RAM.

You should be fine.

Longinus00
Dec 29, 2005
Ur-Quan

kyojin posted:

I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SAbnzbd, fileserver, mythTV (hence not the K - I want VT-d) etc. Is this realistic or is performance going to be too poor with that many machines? 8GB RAM and I'll be using an SSD for the OS to run from so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too.

I'm in no hurry; since I will need all those SATA ports, I need to wait until that is all fixed. Is Bulldozer likely to be a better option? Not surprisingly, running a VM host is not part of the benchmarking for most sites, so I am largely guessing based on multithreading performance.

Any pointers?

The answer to your question depends entirely on what load these servers are going to deal with. The maximum number of VMs you can run on any computer is limited more by RAM than anything else.

kyojin
Jun 15, 2005

I MASHED THE KEYS AND LOOK WHAT I MADE

movax posted:

6-8 unique VMs, or just 6-8 services on a single (or two) VMs?

Unique VMs if possible. My file/sql/etc server has just taken a dump all over itself and I want to start putting services into silos. I know it's a bit unnecessary, but I need to upgrade anyway and playing with multiple VMs will be interesting fun.

I looked at the H67, but the P67 boards have more SATA ports - possibly because the on-chip GPU is disabled? Not sure if that's using PCIe lanes or not, so I'm not sure how that will be on the Z68 boards, and as you say, graphics cards are cheap. My plan was to start with 2x4GB and see how that goes - an SQL server with maybe 3 simultaneous users at most should be alright with 512MB, for instance, and Myth will be capturing digital from DVB-T/S so that shouldn't need much grunt either.

Thanks for the tips - I'm aiming to go for minimal Linux builds as much as possible, and users are just me and the girlfriend and our 5 XBMCs, so my instinct is that this should be pretty viable.

Bob Morales posted:

You should be fine.

Swish, sounds like your setup probably sees a lot more use than mine would. Cheers

kyojin fucked around with this message at 22:42 on Feb 7, 2011

brainwrinkle
Oct 18, 2009

What's going on in here?
Buglord

Alereon posted:

Never use loadline calibration, it will cause voltage overshoots when exiting load conditions that can damage the processor (or at the very least cause hangs or crashes). If the voltage isn't high enough under load but is within safe limits when idle, just increase the voltage setting.

Do you have a source for this? I tried it with LLC turned completely off and it drops about .07v under load. Is that normal? It seems my 2500K is stable at 4.4GHz at 1.32v under load, which would mean I would need a 1.39v Vcore with no LLC versus 1.35v with medium LLC. That seems a bit high.
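
Just to sanity-check those numbers (trivial sketch; the droop figures are the ones quoted above):

code:

# Sanity check of the Vcore arithmetic above. Simple model:
# BIOS/idle Vcore = required load Vcore + idle-to-load droop.
v_load_needed = 1.32    # stable load voltage at 4.4GHz
droop_no_llc  = 0.07    # measured droop with LLC off
droop_med_llc = 0.03    # implied droop with medium LLC (1.35 - 1.32)

print("BIOS Vcore with LLC off:   ", round(v_load_needed + droop_no_llc, 2))   # ~1.39
print("BIOS Vcore with medium LLC:", round(v_load_needed + droop_med_llc, 2))  # ~1.35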

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

brainwrinkle posted:

Do you have a source for this? I tried it with LLC turned completely off and it drops about .07v under load. Is that normal? It seems my 2500K is stable at 4.4GHz at 1.32v under load, which would mean I would need a 1.39v Vcore with no LLC versus 1.35v with medium LLC. That seems a bit high.
Here's an Anandtech article about CPU power delivery and Loadline Calibration. It was written for the 45nm Core 2 Quads, though the principles are the same (and 32nm CPUs will be even more sensitive to voltage transients). A .07v drop under load doesn't seem abnormal; mine can be up to .10v on my Penryn system, though the voltages are also higher proportionally (and that's when pushing my CPU past the point where current starts getting retarded).

brainwrinkle
Oct 18, 2009

What's going on in here?
Buglord

Alereon posted:

Here's an Anandtech article about CPU power delivery and Loadline Calibration. It was written for the 45nm Core 2 Quads, though the principles are the same (and 32nm CPUs will be even more sensitive to voltage transients). A .07v drop under load doesn't seem abnormal; mine can be up to .10v on my Penryn system, though the voltages are also higher proportionally (and that's when pushing my CPU past the point where current starts getting retarded).

Wow, interesting. Thanks for the information! I'll definitely turn LLC off then. Do you think it would be safe to set the Vcore to 1.39 at idle? Is the "safe" Vcore measured at load or idle? I know 1.35v-ish is considered safe for Sandy Bridge. Is there any way to have the idle voltage drop with Speedstep like my Q6600 did?

movax
Aug 30, 2008

kyojin posted:

Unique VMs if possible. My file/sql/etc server has just taken a dump all over itself and I want to start putting services into silos. I know it's a bit unnecessary, but I need to upgrade anyway and playing with multiple VMs will be interesting fun.
Definitely go for the 2600 then. More threads presented to the host; combine that with >=16GB of RAM.

quote:

I looked at the H67, but the P67 boards have more SATA ports - possibly because the on-chip GPU is disabled? Not sure if that's using PCIe lanes or not, so I'm not sure how that will be on the Z68 boards, and as you say, graphics cards are cheap. My plan was to start with 2x4GB and see how that goes - an SQL server with maybe 3 simultaneous users at most should be alright with 512MB, for instance, and Myth will be capturing digital from DVB-T/S so that shouldn't need much grunt either.
AFAIK the P67 and H67 should have the same number of SATA ports via the PCH. Those extra ports you're seeing are from extra SATA controllers (like Marvell 6Gbps or JMicron/Silicon Image eSATA) that are often added to P67 boards. The PCH should toss out 6 ports; the extras are just that - extra.

brainwrinkle
Oct 18, 2009

What's going on in here?
Buglord

brainwrinkle posted:

Wow, interesting. Thanks for the information! I'll definitely turn LLC off then. Do you think it would be safe to set the Vcore to 1.39 at idle? Is the "safe" Vcore measured at load or idle? I know 1.35v-ish is considered safe for Sandy Bridge. Is there any way to have the idle voltage drop with Speedstep like my Q6600 did?

If anyone else has an ASRock P67 board, I figured out that using offset voltage settings allows the processor to undervolt while idle. It also seems to have better stability and voltage drop characteristics under load with LLC disabled so far, so I'd highly recommend using offset voltage over static. Thanks again for the help Alereon. I'll take the rest to the Overclocking megathread.

Smudgie Buggler
Feb 27, 2005

SET PHASERS TO "GRINDING TEDIUM"
Intel are resuming shipping faulty chipsets to manufacturers whose products won't use the 3Gb/s ports. What's the bet Apple threw a loving hissy fit about Intel's fuckup and threatened all kinds of poo poo if they don't let them announce and ship the SB MBPs and iMacs everybody's expecting?

Shmoogy
Mar 21, 2007

Cwapface posted:

Intel are resuming shipping faulty chipsets to manufacturers whose products won't use the 3Gb/s ports. What's the bet Apple threw a loving hissy fit about Intel's fuckup and threatened all kinds of poo poo if they don't let them announce and ship the SB MBPs and iMacs everybody's expecting?

I was about to post and suggest that it was probably Apple that was able to persuade/force Intel to send them the boards.

Smudgie Buggler
Feb 27, 2005

SET PHASERS TO "GRINDING TEDIUM"
Yeah, I mean, Apple could probably give two shits about faulty 3Gb/s ports. How many of their products are even capable of using more than two SATA ports? One, the Mac Pro, which probably wasn't going to be given Sandy Bridge architecture in the next six months anyway.

frumpsnake
Jan 30, 2001

The sad part is, he wasn't always evil.
The 27" iMacs have 3 SATA ports, for SSD+HDD+Optical configurations.

brap
Aug 23, 2004

Grimey Drawer
they might all be custom, though, because they use custom temperature sensors on their HDDs.

movax
Aug 30, 2008

fleshweasel posted:

they might all be custom, though, because they use custom temperature sensors on their HDDs.

The ports? I was under the impression it is a combination of custom drive firmware that takes advantage of auxiliary/rarely used sections of the SATA specification.

The good news, though, is that if for some reason you absolutely don't want to RMA your board (or can't), Intel is implicitly assuring us all that the failure will stop at the 3Gb/s ports, and that no other sections of the PCH are known to be faulty/degradable. So I'd be more confident than ever in trying to pick up "trash" desktop boards with the faulty chipsets on the cheap.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Cwapface posted:

Yeah, I mean, Apple could probably give two shits about faulty 3Gb/s ports. How many of their products are even capable of using more than two SATA ports? One, the Mac Pro, which probably wasn't going to be given Sandy Bridge architecture in the next six months anyway.

Mac Pros use the Xeon chipsets so they aren't affected by the SB issue.

sbyers77
Jan 9, 2004

Is there any easy way to check which SATA ports my devices are plugged into without opening up the case?

I got a pre-built system at work that's been recalled, but my boss says he's cool with not returning it if nothing is plugged into the SATA 3Gbps ports. I'd rather not open it up to figure this out.

brainwrinkle
Oct 18, 2009

What's going on in here?
Buglord

sbyers77 posted:

Is there any easy way to check which SATA ports my devices are plugged into without opening up the case?

I got a pre-built system at work that's been recalled, but my boss says he's cool with not returning it if nothing is plugged into the SATA 3Gbps ports. I'd rather not open it up to figure this out.

Device Manager -> Disk Drives -> Properties -> Location: Location 0 and 1 are 6 Gbps, the rest are 3 Gbps.
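
If you'd rather not click through Device Manager for every drive, something like this dumps similar info via WMI (Python on the Windows box). The SCSI port/target numbers don't always line up exactly with the "Location" field Device Manager shows, so treat it as a rough cross-check only:

code:

# Lists drives with their WMI SCSI port/target numbers (Windows only).
# These don't always map 1:1 to Device Manager's "Location", so verify there.
import subprocess

out = subprocess.check_output(
    ["wmic", "diskdrive", "get", "Caption,SCSIPort,SCSITargetId", "/format:csv"],
    text=True,
)
for line in out.strip().splitlines():
    if line and not line.startswith("Node"):   # skip the CSV header row
        print(line)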

Tunga
May 7, 2004

Grimey Drawer
You can also just leave it as-is, and if you see a problem in 2-3 years you can open it up and move the drives to the 6Gb/s ports.

R1CH
Apr 7, 2002

The Ron Jeremy of the coding world
What if the problem is silent data corruption? By the time you notice it you could have significant corruption all over your drive. I personally wouldn't risk it.

sbyers77
Jan 9, 2004

brainwrinkle posted:

Device Manager -> Disk Drives -> Properties -> Location: Location 0 and 1 are 6 Gbps, the rest are 3 Gbps.

Hard drive is Location 0, Optical drive is Location 1. Looks like I am good to go!

Star War Sex Parrot
Oct 2, 2003

movax posted:

The ports? I was under the impression it is a combination of custom drive firmware that takes advantage of auxiliary/rarely used sections of the SATA specification.
They have two methods depending on the system/drive: one is via unused sections of the SATA spec, the other uses proprietary connectors on the drive's jumper pins. They're trying to move everything to the former.

Shmoogy
Mar 21, 2007
Is there any difference between pci sata cards? One of my drives keeps disappearing, and I fear that if I keep it connected, I'm going to corrupt everything on it. I'm using 3 sata ports, and am adding an additional fourth sata drive soon. I don't think I can go 2-3 months for RMA before adding another drive.

2-3 sata 3 gb/s pci card that's as cheap and reliable as possible, any recommendations?

WhyteRyce
Dec 30, 2001

Shmoogy posted:

Is there any difference between pci sata cards? One of my drives keeps disappearing, and I fear that if I keep it connected, I'm going to corrupt everything on it. I'm using 3 sata ports, and am adding an additional fourth sata drive soon. I don't think I can go 2-3 months for RMA before adding another drive.

2-3 sata 3 gb/s pci card that's as cheap and reliable as possible, any recommendations?

This one has worked for me:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115072

PCIe, supports SATA 6Gb/s and is pretty cheap. Uses a Marvell chip. Pretty much every other PCIe card is using either JMicron or Silicon Image chips.

WhyteRyce fucked around with this message at 01:40 on Feb 10, 2011

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Also check the drive's SMART info with CrystalDiskInfo. If this is the recall problem, you'll see a lot of UltraDMA CRC errors. If it's anything else, it could be the drive itself.
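
If you want to script that check instead of eyeballing CrystalDiskInfo, smartmontools reports the same counter as SMART attribute 199 (UDMA_CRC_Error_Count). A minimal sketch, assuming smartctl is installed and the drive of interest is /dev/sda:

code:

# Print the UltraDMA CRC error counter via smartmontools.
# Assumes smartctl is on the PATH and the drive is /dev/sda.
import subprocess

out = subprocess.check_output(["smartctl", "-A", "/dev/sda"], text=True)
for line in out.splitlines():
    if "UDMA_CRC_Error_Count" in line:
        print("UltraDMA CRC errors (raw):", line.split()[-1])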

movax
Aug 30, 2008

Shmoogy posted:

Is there any difference between pci sata cards? One of my drives keeps disappearing, and I fear that if I keep it connected, I'm going to corrupt everything on it. I'm using 3 sata ports, and am adding an additional fourth sata drive soon. I don't think I can go 2-3 months for RMA before adding another drive.

2-3 sata 3 gb/s pci card that's as cheap and reliable as possible, any recommendations?

Yes, though the difference is not that big of a deal under Windows. Windows tends to have pretty solid driver support for all the major brands - Silicon Image, Marvell, etc. If you're in the NAS game, then the manufacturer of the storage controller becomes very important.

Veinless
Sep 11, 2008

Smells like motivation
I have 4 1TB drives in a RAID 0 on the 3Gb/s ports. If/when one of these ports dies, will I be OK moving the drive from the failed port to a 6Gb/s port?

That is, will the RAID array function fine after the move?

My guess is yes, but it depends on what happens when the port fails.

The 4 drives are in a 4x800GB RAID10 for OS, and 4x200GB short-stroked RAID0 for gaming, for the curious.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Anyone have any word on the LRDIMMs we'll be needing for the server/workstation-grade version of Sandy Bridge? I'm seeing the 20th of this month listed as a release date for some of the new Xeons, and I'm confused about when anything will actually happen from Intel, especially with the Cougar Point issues. I'm looking to build myself a workstation primarily for work purposes and would appreciate some timeline (and an idea of budget) on how much longer I'll need to keep doing all my work on a maxed-out, upgraded MacBook Pro attached to a NAS.

movax
Aug 30, 2008

Veinless posted:

I have 4 1TB drives in a RAID 0 on the 3Gb/s ports. If/when one of these ports dies, will I be OK moving the drive from the failed port to a 6Gb/s port?

That is, will the RAID array function fine after the move?

My guess is yes, but it depends on what happens when the port fails.

The 4 drives are in a 4x800GB RAID10 for OS, and 4x200GB short-stroked RAID0 for gaming, for the curious.

Oh god :cry: Intel Matrix RAID... the last time I used it was on the ICH8R, I think, so maybe it's gotten better since then. Anyways, chances are good that when that PLL goes down, it's taking all your 3Gbps ports with it, not just one. So I'd find a scratch drive somewhere, move your data, and switch to some other RAID solution.

quote:

Anyone have any word on the LRDIMMs we'll be needing for the server/workstation-grade version of Sandy Bridge? I'm seeing the 20th of this month listed as a release date for some of the new Xeons, and I'm confused about when anything will actually happen from Intel, especially with the Cougar Point issues. I'm looking to build myself a workstation primarily for work purposes and would appreciate some timeline (and an idea of budget) on how much longer I'll need to keep doing all my work on a maxed-out, upgraded MacBook Pro attached to a NAS.

Cougar Point is the desktop chipset, so server chipsets should be a-ok. Not sure about the LRDIMMs.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
General process problems from one business line can ripple into another (e.g. the server group goes "omg, triple-check our poo poo, too!"). Maybe they're affected somehow, maybe not.

I'm kind of excited for the SAS controllers getting built into the Xeons. I'll be doing a RAID0 of SSDs, and not relying so much upon the motherboard or an extra add-on card for RAID0 will be a nice change of pace. Now, it does mean I'm going to choose vendor lock-in, but I don't think I mind so much here given it's a build-once-and-forget type of machine.

movax
Aug 30, 2008

necrobobsledder posted:

General process problems from one business line can ripple into another (e.g. the server group goes "omg, triple-check our poo poo, too!"). Maybe they're affected somehow, maybe not.

It's not a process problem; it's a single transistor whose electrical specifications were violated, specifically in the P67 and H67. I'm sure the server chipset group (hell, every group at every semi company) has been triple-checking their poo poo nowadays, after their management poo poo bricks at the thought of having to do a $1bil recall themselves.

quote:

I'm kind of excited for the SAS controllers getting built into the Xeons. I'll be doing a RAID0 of SSDs, and not relying so much upon the motherboard or an extra add-on card for RAID0 will be a nice change of pace. Now, it does mean I'm going to choose vendor lock-in, but I don't think I mind so much here given it's a build-once-and-forget type of machine.

There are SAS controllers built into the new Xeons? :confused: Are you sure?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

movax posted:

It's not a process problem,
I should have clarified that, in this thread of all places: I meant business process, not fabrication or manufacturing process. Not quite sure what it'll take to keep this from happening in the business lines, but with GPUs and memory controllers and god knows what else merging onto dies, who knows what'll happen in the future?

movax posted:

There are SAS controllers built into the new Xeons? :confused: Are you sure?
Set for Romley, which is supposed to be out this year and has been announced to have SAS controllers. http://www.glgroup.com/News/New-Intel-processor-chips-will-incorporate-SAS-controller-50602.html

I'm already biting my nails at the thought of how much LRDIMMs will cost and sulking back to FBDIMMs for my 12GB/16GB setup (depending upon the feature and cost matrix at release). I don't need the LRDIMM features for a stupid workstation, but on the other hand I do need to have the sort of configuration a customer might have in their datacenter when I'm doing some performance analysis for some stuff I'm writing now off the clock.

movax
Aug 30, 2008

necrobobsledder posted:

I should have clarified that, in this thread of all places: I meant business process, not fabrication or manufacturing process. Not quite sure what it'll take to keep this from happening in the business lines, but with GPUs and memory controllers and god knows what else merging onto dies, who knows what'll happen in the future?

Oh, gotcha.

quote:

Set for Romley, which is supposed to be out this year and has been announced to have SAS controllers. http://www.glgroup.com/News/New-Intel-processor-chips-will-incorporate-SAS-controller-50602.html

Ooh, that is pretty cool (the controller is in the PCH, not the CPU, BTW; it wouldn't make much sense to cram a SAS controller onto the CPU die, but it's still guaranteed to be on the mobo!). I wonder what driver support will be like...

quote:

I'm already biting my nails at the thought of how much LRDIMMs will cost and sulking back to FBDIMMs for my 12GB/16GB setup (depending upon the feature and cost matrix at release). I don't need the LRDIMM features for a stupid workstation, but on the other hand I do need to have the sort of configuration a customer might have in their datacenter when I'm doing some performance analysis for some stuff I'm writing now off the clock.

Knowing pricing for Intel server platforms, users might as well start :a2m: to get a good price.

Cerri
Apr 27, 2006
Well, I've got 4 hard drives, so I am using the affected ports on my P8P67 Pro, but I'm not terribly worried about it. I haven't noticed any problems so far.

I got an email from Micro Center about it before I'd even heard about the issue, and it said they'll contact me again and handle my RMA as soon as replacements are available.

Sure, it's an inconvenience, but they're fully fixing the issue in what (so far, at least) is a timely manner, so I'm not that stuffed about it. I figure that's the risk I take for buying new tech the day it comes out. If you don't want to deal with possible bugs/defects, wait six months; that's my general rule of thumb.

SynVisions
Jun 29, 2003

Cerri posted:

Well, I've got 4 hard drives, so I am using the affected ports on my P8P67 Pro, but I'm not terribly worried about it. I haven't noticed any problems so far.

How so? I have 4 hard drives on my P8P67 Pro and I have them all on the unaffected 6Gb/s ports. The P8P67 Pro has 4 x 3Gb/s Intel (affected), 2 x 6Gb/s Intel (unaffected) and 2 x 6Gb/s Marvell (unaffected). The only thing I have on an affected port is my DVD drive.

R1CH
Apr 7, 2002

The Ron Jeremy of the coding world
From past experience, the Marvell drivers are pretty awful. I'd much rather have my drives on the Intel SATA ports than touch the Marvell ones. I even disabled the Marvell controller in the BIOS.

Shmoogy
Mar 21, 2007
I think the Marvell drivers for Windows 7 x64 are fine, but other than that, they might not be that great.


frumpsnake
Jan 30, 2001

The sad part is, he wasn't always evil.
I'm sure they're fine for your porn drive, but the performance isn't exactly encouraging.
