Rexz posted:The idea that in a few years we'll be running our file servers off spare Sandy Bridge stuff is fantastical - and completely true! I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SABnzbd, fileserver, MythTV (hence not the K - I want VT-d) etc. Is this realistic, or is performance going to be too poor with that many machines? 8GB RAM, and I'll be using an SSD for the OS to run from, so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too. I'm in no hurry - since I will need all those SATA ports, I need to wait until the chipset issue is fixed. Is Bulldozer likely to be a better option? Not surprisingly, running a VM host is not part of the benchmarking for most sites, so I am largely guessing based on multithreading performance. Any pointers?
|
|
# ? Feb 7, 2011 20:33 |
|
FunkyUnderpants posted:Quick question: does anyone know if the Z68 chipset is going to be able to support dual PCI Express x16 slots instead of crippling them both to x8? I recall that nVidia chipsets a few revisions back could do this on their premium boards, but I've also heard that some lawsuit between nVidia and Intel prevents nVidia from doing the same for us this time around. Z68 will just allow you to have the FDI* link between PCH and CPU (thus enabling integrated graphics) whilst also allowing you to overclock like a madman. There are 24 PCI-Express 2.0 lanes available on a Sandy Bridge platform: 16 from the processor, 8 from the chipset. The processor can offer either one x16 link or two x8 links. The chipset can offer 8 lanes, grouped "logically" into ports 0-3 and 4-7, and each of those groupings can become a x4 link. Most of those lanes disappear to onboard peripherals such as PCIe<->PCI bridges, USB 3.0, additional SATA controllers, etc. I don't foresee the CPU gaining any more lanes until a full redesign; it's more feasible to get a new PCH that has a larger PCIe Root Complex. I'd argue it's unlikely that a new PCH with more lanes will help for graphics, though, because it'll again be groupings of 4 lanes, so you still wouldn't gain an additional x8 or x16 link. PCIe packet switches (aka bridges) can throw more lanes out there, but you're still constrained by the bandwidth of the upstream port. quote:I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SABnzbd, fileserver, MythTV (hence not the K - I want VT-d) etc. Is this realistic or is performance going to be too poor with that many machines? 8GB RAM and I'll be using an SSD for the OS to run from so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too. 6-8 unique VMs, or just 6-8 services on a single (or two) VMs? You will care most about # of threads and amount of RAM. 
MythTV sounds like the most CPU-hungry task you've listed (I assume your SQL server would be MSSQL or MySQL in a development role, not a production role). A 2600 (4 cores w/ HT) plus 16GB of RAM on the ASRock P67 board would be pretty cheap and powerful. Could try and find a nice H67 board as well (or Z68) so you don't have to waste watts driving a dedicated video card. (Or just buy an ATI Rage XL PCI card from your local shop for $5). I've blown my wad on drives recently, and find my E6600 faltering under heavy load doing fileserving, so I don't have money to upgrade, but that's what I'd do if I could. *FDI - Flexible Display Interface. IIRC, it is a PCI-Express style link (electrically and physically anyway, an AC-coupled differential transmission line) that pipes GPU data from the CPU to the PCH, where the PCH can output a variety of signals/combinations. Last-generation Ibex Peak (5-Series) could do 2 outputs at most, but could pick from VGA, HDMI, DisplayPort and LVDS.
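To make the lane arithmetic in movax's post concrete, here's a minimal sketch (Python; the figures are taken from the post, and the function names are just for illustration):

```python
# Sandy Bridge desktop PCIe 2.0 lane budget, per the post above.
CPU_LANES = 16   # from the processor: one x16, or bifurcated into two x8
PCH_LANES = 8    # from the chipset: ports 0-3 and 4-7

def cpu_link_configs():
    # The CPU root port offers either a single x16 or two x8 links.
    return [(16,), (8, 8)]

def widest_pch_link():
    # PCH lanes are grouped "logically" into two blocks of four, and a
    # single link can't span groups, so x4 is the ceiling.
    return 4

total_lanes = CPU_LANES + PCH_LANES
print(total_lanes)        # 24
print(widest_pch_link())  # 4: why a third x8/x16 slot can't come from the PCH
```

This is also why a fatter PCH wouldn't fix dual-x16: more groups of four lanes still never add up to another x8 or x16 link.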
|
# ? Feb 7, 2011 20:55 |
|
brainwrinkle posted:What is the general consensus on load line calibration when overclocking Sandy Bridge? I've heard in a few places that it causes very short voltage spikes on the higher settings and the highest setting can increase the voltage under load. I've got my 2500k stable at 1.35V and medium (Level 3 in the ASRock BIOS) load line calibration. Would it be worth increasing the LLC to drop the Vcore a bit? Do any of the other voltages really matter for a ~4.5 GHz overclock? Never use loadline calibration, it will cause voltage overshoots when exiting load conditions that can damage the processor (or at the very least cause hangs or crashes). If the voltage isn't high enough under load but is within safe limits when idle, just increase the voltage setting. FunkyUnderpants posted:Quick question: does anyone know if the Z68 chipset is going to be able to support dual PCI Express x16 slots instead of crippling them both to x8? I recall that nVidia chipsets a few revisions back could do this on their premium boards, but I've also heard that some lawsuit between nVidia and Intel prevents nVidia from doing the same for us this time around.
|
# ? Feb 7, 2011 22:12 |
|
kyojin posted:I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SABnzbd, fileserver, MythTV (hence not the K - I want VT-d) etc. Is this realistic or is performance going to be too poor with that many machines? 8GB RAM and I'll be using an SSD for the OS to run from so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too. We have an older dual-quad at work with 12GB memory (~7GB used); it runs about 10 Linux and Windows server VMs of various duties, and I'm probably only using ~1500MHz of CPU most of the time. I could probably run 30 more if I had another 12GB of RAM. You should be fine.
|
# ? Feb 7, 2011 22:20 |
|
kyojin posted:I'm looking at buying a 2500 purely for a file and VM server. I want to run 6-8 low intensity servers on there - a DC, SQL, SABnzbd, fileserver, MythTV (hence not the K - I want VT-d) etc. Is this realistic or is performance going to be too poor with that many machines? 8GB RAM and I'll be using an SSD for the OS to run from so hopefully drive performance should not be a bottleneck. Keen to use as little power as possible when nothing much is going on too. The answer to your question depends entirely on what load these servers are going to deal with. The maximum number of VMs you can run on any computer is limited by RAM more than anything else.
|
# ? Feb 7, 2011 22:22 |
movax posted:6-8 unique VMs, or just 6-8 services on a single (or two) VMs? Unique VMs if possible. My file/sql/etc server has just taken a dump all over itself and I want to start putting services into silos. I know it's a bit unnecessary, but I need to upgrade anyway and playing with multiple VMs will be interesting fun. I looked at the H67, but the P67 boards have more SATA ports - possibly because the on-chip GPU is disabled? Not sure if that's using PCIe lanes or not, so I am not sure how that will be on the Z68 boards, and as you say, graphics cards are cheap. My plan was to start with 2x4GB and see how that goes - an SQL server with maybe 3 simultaneous users at most should be alright with 512MB for instance, and Myth will be capturing digital from DVB-T/S so that shouldn't need much grunt either. Thanks for the tips - I'm aiming to go for minimal Linux builds as much as possible, and users are just me and the girlfriend and our 5 XBMCs, so my instinct is that this should be pretty viable. Bob Morales posted:You should be fine. Swish, sounds like your setup probably sees a lot more use than mine would. Cheers kyojin fucked around with this message at 22:42 on Feb 7, 2011 |
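For what it's worth, kyojin's 2x4GB plan pencils out even with generous per-VM figures. A quick sketch - only the 512MB SQL number comes from the post; the other allotments are hypothetical estimates:

```python
# Back-of-the-envelope RAM budget for 6-8 low-intensity VMs on 8GB.
# Only the 512MB SQL figure comes from the post; the rest are guesses.
ram_mb = {
    "domain_controller": 512,
    "sql": 512,
    "sabnzbd": 256,
    "fileserver": 512,
    "mythtv_backend": 1024,
    "spare_vm": 512,
}
host_overhead_mb = 1024  # hypervisor + host OS, an assumption

total_mb = sum(ram_mb.values()) + host_overhead_mb
headroom_mb = 8 * 1024 - total_mb
print(total_mb, headroom_mb)  # 4352 3840: fits in 8GB with room to grow
```

Which lines up with Bob Morales' point above: RAM, not CPU, is what caps the VM count.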
|
# ? Feb 7, 2011 22:38 |
|
Alereon posted:Never use loadline calibration, it will cause voltage overshoots when exiting load conditions that can damage the processor (or at the very least cause hangs or crashes). If the voltage isn't high enough under load but is within safe limits when idle, just increase the voltage setting. Do you have a source for this? I tried it with LLC turned completely off and I drop about .07v under load. Is that normal? It seems my 2500K is stable at 4.4 GHz at 1.32v under load, which would mean I would need a 1.39v Vcore with no LLC versus 1.35v with medium LLC. That seems a bit high.
|
# ? Feb 8, 2011 00:35 |
|
brainwrinkle posted:Do you have a source for this? I tried it with LLC turned completely off and I drop about .07v under load. Is that normal? It seems my 2500K is stable at 4.4 GHz at 1.32v under load, which would mean I would need a 1.39v Vcore with no LLC versus 1.35v with medium LLC. That seems a bit high. Here's an Anandtech article about CPU power delivery and Loadline Calibration. It was written for the 45nm Core 2 Quads, though the principles are the same (and 32nm CPUs will be even more sensitive to voltage transients). A .07v drop under load doesn't seem abnormal, mine can be up to .10v on my Penryn system, though the voltages are also higher proportionally (and that's when pushing my CPU past the point where current starts getting retarded).
|
# ? Feb 8, 2011 00:51 |
|
Alereon posted:Here's an Anandtech article about CPU power delivery and Loadline Calibration. It was written for the 45nm Core 2 Quads, though the principles are the same (and 32nm CPUs will be even more sensitive to voltage transients). A .07v drop under load doesn't seem abnormal, mine can be up to .10v on my Penryn system, though the voltages are also higher proportionally (and that's when pushing my CPU past the point where current starts getting retarded). Wow, interesting. Thanks for the information! I'll definitely turn LLC off then. Do you think it would be safe to set the Vcore to 1.39 at idle? Is the "safe" Vcore measured at load or idle? I know 1.35v-ish is considered safe for Sandy Bridge. Is there any way to have the idle voltage drop with Speedstep like my Q6600 did?
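The vdroop trade-off in this exchange is just arithmetic: the Vcore you set at idle minus the droop is what the CPU sees under load. A sketch using the numbers quoted above (the medium-LLC droop is inferred from the 1.35V set / 1.32V load figures, not stated directly):

```python
# Loadline ("vdroop") arithmetic from the numbers in this exchange.
# The full relation is V_load = V_set - I_load * R_loadline; only the
# observed droop is known here, so treat it as a lump sum.
def required_idle_vcore(v_load_target, vdroop):
    """Vcore to set in BIOS so the CPU still sees v_load_target under load."""
    return round(v_load_target + vdroop, 2)

droop_no_llc = 0.07   # observed with LLC off
droop_med_llc = 0.03  # inferred from 1.35V set / 1.32V load at medium LLC

print(required_idle_vcore(1.32, droop_no_llc))   # 1.39, matching the post
print(required_idle_vcore(1.32, droop_med_llc))  # 1.35
```

The catch, per Alereon's point, is that LLC trades a higher idle Vcore for overshoot transients when load is released, which is why the safe-voltage question matters at both ends.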
|
# ? Feb 8, 2011 01:20 |
|
kyojin posted:Unique VMs if possible. My file/sql/etc server has just taken a dump all over itself and I want to start putting services into silos. I know it's a bit unnecessary, but I need to upgrade anyway and playing with multiple VMs will be interesting fun. quote:I looked at the H67, but the P67 boards have more SATA ports - possibly because the onchip gpu is disabled? Not sure if that using PCIE lanes or not so I am not sure how that will be on the Z68 boards and as you say, graphics cards are cheap. My plan was to start with 2x4GB and see how that goes - an sql server with maybe 3 simultaneous users at most should be alright with 512MB for instance, and myth will be capturing digital from DVB-T/S so that shouldn't need much grunt either.
|
# ? Feb 8, 2011 03:56 |
|
brainwrinkle posted:Wow, interesting. Thanks for the information! I'll definitely turn LLC off then. Do you think it would be safe to set the Vcore to 1.39 at idle? Is the "safe" Vcore measured at load or idle? I know 1.35v-ish is considered safe for Sandy Bridge. Is there any way to have the idle voltage drop with Speedstep like my Q6600 did? If anyone else has an ASRock P67 board, I figured out that using offset voltage settings allows the processor to undervolt while idle. It also seems to have better stability and voltage drop characteristics under load with LLC disabled so far, so I'd highly recommend using offset voltage over static. Thanks again for the help Alereon. I'll take the rest to the Overclocking megathread.
|
# ? Feb 8, 2011 04:47 |
|
Intel are resuming shipping the faulty chipsets to manufacturers whose products won't use the 3Gb/s ports. What's the bet Apple threw a loving hissy fit about Intel's fuckup and threatened all kinds of poo poo if they don't let them announce and ship the SB MBPs and iMacs everybody's expecting?
|
# ? Feb 8, 2011 16:47 |
|
Cwapface posted:Intel are resuming shipping faulty chipsets to manufacturers whose products won't use the 3GB/sec ports. What's the bet Apple threw a loving hissy fit about Intel's fuckup and threatened all kinds of poo poo if they don't let them announce and ship the SB MBPs and iMacs everybody's expecting? I was about to post and suggest that it was probably Apple that was able to persuade/force them to send them the boards.
|
# ? Feb 8, 2011 16:55 |
|
Yeah, I mean, Apple could probably give two shits about faulty 3Gb/s ports. How many of their products are even capable of using more than two SATA ports? One - the Mac Pro - which probably wasn't going to be given the Sandy Bridge architecture in the next six months anyway.
|
# ? Feb 8, 2011 17:02 |
|
The 27" iMacs have 3 SATA ports, for SSD+HDD+Optical configurations.
|
# ? Feb 8, 2011 17:16 |
|
They might all be custom, though, because they use custom temperature sensors on their HDDs.
|
# ? Feb 8, 2011 17:16 |
|
fleshweasel posted:they might all be custom, though, because they use custom temperature sensors on their HDDs. The ports? I was under the impression it's a combination of custom drive firmware that takes advantage of auxiliary/rarely-used sections of the SATA specification. The good news, though, is that for some reason, if you absolutely don't want to RMA your board (or can't), Intel is implicitly assuring us all that the failure will stop at the 3Gb/s ports, and that no other sections of the PCH are known to be faulty/degradable. So I'd be more confident than ever in trying to pick up "trash" desktop boards with the faulty chipsets on the cheap.
|
# ? Feb 8, 2011 18:00 |
|
Cwapface posted:Yeah, I mean, Apple could probably give two shits about faulty 3Gb/s ports. How many of their products are even capable of using more than two SATA ports? One, the Mac Pro, which probably wasn't going to be given Sandy Bridge architecture in the next six months anyway. Mac Pros use the Xeon chipsets, so they aren't affected by the SB issue.
|
# ? Feb 8, 2011 18:15 |
|
Is there any easy way to check which SATA ports my devices are plugged into without opening up the case? I got a pre-built system at work that's been recalled, but my boss says he's cool with not returning it if nothing is plugged into the SATA 3Gbps ports. I'd rather not open it up to figure this out.
|
# ? Feb 9, 2011 06:39 |
|
sbyers77 posted:Is there any easy way to check which SATA ports my devices are plugged into without opening up the case? Device Manager -> Disk Drives -> Properties -> Location: Location 0 and 1 are 6 Gbps, the rest are 3 Gbps.
|
# ? Feb 9, 2011 06:45 |
|
You can also just leave it as is and if you see a problem in 2-3 years then you can open it and move them to the 6Gb/s ports.
|
# ? Feb 9, 2011 12:16 |
|
What if the problem is silent data corruption? By the time you notice it you could have significant corruption all over your drive. I personally wouldn't risk it.
|
# ? Feb 9, 2011 19:37 |
|
brainwrinkle posted:Device Manager -> Disk Drives -> Properties -> Location: Location 0 and 1 are 6 Gbps, the rest are 3 Gbps. Hard drive is Location 0, Optical drive is Location 1. Looks like I am good to go!
|
# ? Feb 9, 2011 21:07 |
|
movax posted:The ports? I was under the impression it is a combination of custom drive firmware that takes advantage of auxiliary/rarely used sections of the SATA specification.
|
# ? Feb 9, 2011 21:10 |
|
Is there any difference between PCI SATA cards? One of my drives keeps disappearing, and I fear that if I keep it connected, I'm going to corrupt everything on it. I'm using 3 SATA ports, and am adding an additional fourth SATA drive soon. I don't think I can go 2-3 months for an RMA before adding another drive. A 2-3 port SATA 3Gb/s PCI card that's as cheap and reliable as possible - any recommendations?
|
# ? Feb 9, 2011 22:08 |
|
Shmoogy posted:Is there any difference between pci sata cards? One of my drives keeps disappearing, and I fear that if I keep it connected, I'm going to corrupt everything on it. I'm using 3 sata ports, and am adding an additional fourth sata drive soon. I don't think I can go 2-3 months for RMA before adding another drive. This one has worked for me: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115072 PCIe, supports SATA 6Gb/s and is pretty cheap. Uses the Marvell chip. Pretty much every other PCIe card is using either JMicron or Silicon Image chips. WhyteRyce fucked around with this message at 01:40 on Feb 10, 2011 |
# ? Feb 10, 2011 01:35 |
|
Also check the drive's SMART info with CrystalDiskInfo. If this is the recall problem, you'll see a lot of UltraDMA CRC errors. Anything else, it could be the drive itself.
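If you'd rather script the check than eyeball CrystalDiskInfo, something like this works against `smartctl -A` output from smartmontools. The sample text below is canned for illustration; real raw values will differ:

```python
# Scripted version of the check above: pull SMART attribute 199
# (UDMA_CRC_Error_Count) out of `smartctl -A` text. In practice you'd
# feed in subprocess.check_output(["smartctl", "-A", "/dev/sda"],
# text=True); a canned sample is used here so the sketch is self-contained.
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   37
"""

def udma_crc_errors(smartctl_text):
    """Return the raw count of attribute 199, or 0 if it's absent."""
    for line in smartctl_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "199":
            return int(fields[-1])  # RAW_VALUE is the last column
    return 0

print(udma_crc_errors(sample))  # 37: a climbing count points at the link/port
```

CRC errors are counted on the SATA link itself, which is why a nonzero, growing count implicates the port or cable rather than the drive media.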
|
# ? Feb 10, 2011 01:56 |
|
Shmoogy posted:Is there any difference between pci sata cards? One of my drives keeps disappearing, and I fear that if I keep it connected, I'm going to corrupt everything on it. I'm using 3 sata ports, and am adding an additional fourth sata drive soon. I don't think I can go 2-3 months for RMA before adding another drive. Yes, though the difference is not that big of a deal under Windows. Windows tends to have pretty solid driver support for all the major brands - Silicon Image, Marvell, etc. If you're in the NAS game, then the manufacturer of the storage controller becomes very important.
|
# ? Feb 10, 2011 03:08 |
|
I have 4 1TB drives in a RAID 0 on the 3Gb/s ports. If/when one of these ports dies, will I be OK moving the drive from the failed port to a 6Gb/s port? That is, will the RAID array function fine after the move? My guess is yes, but it depends on what happens when the port fails. The 4 drives are in a 4x800GB RAID10 for OS, and a 4x200GB short-stroked RAID0 for gaming, for the curious.
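For the curious, the usable capacities of Veinless's split arrays fall out of the standard RAID formulas - a quick sketch:

```python
# Usable space for the layout above: each 1TB drive carries an 800GB
# slice (four-drive RAID10) and a 200GB short-stroked slice (RAID0).
def raid0_gb(drives, slice_gb):
    return drives * slice_gb       # pure striping, no redundancy

def raid10_gb(drives, slice_gb):
    return drives * slice_gb // 2  # mirrored pairs halve usable space

os_gb = raid10_gb(4, 800)
games_gb = raid0_gb(4, 200)
print(os_gb, games_gb)  # 1600 800
```

Note the gaming slice has zero redundancy: any single port or drive failure takes the whole RAID0 with it, which is what makes the dying-port question above more than academic.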
|
# ? Feb 10, 2011 16:57 |
|
Anyone have any word on the LRDIMMs we'll be needing for the server/workstation grade version of Sandy Bridge? I'm seeing the 20th of this month listed as a release date for some of the new Xeons, and I'm confused about when anything will actually happen from Intel, especially with the Cougar Point issues. I'm looking to build myself a workstation primarily for work purposes and would appreciate some timeline (and an idea of budget) on how much longer I'll need to keep doing all my work on a maxed-out, upgraded MacBook Pro attached to a NAS.
|
# ? Feb 10, 2011 17:38 |
|
Veinless posted:I have 4 1TB drives in a RAID 0 on the 3Gb/s ports. If/when one of these ports dies, will I be OK moving the drive from the failed port to a 6Gb/s port? Oh god, Intel Matrix RAID...last time I used it was on the ICH8R I think, so maybe it's gotten better since then. Anyway, chances are good that when that PLL goes down, it's taking all your 3Gbps ports with it, not just one. So I'd find a scratch drive somewhere, move your data, and switch to some other RAID solution. quote:Anyone have any word on the LRDIMMs we'll be needing for the server / workstation grade version of Sandy Bridge? I'm seeing some odd release dates of the 20th this month being release dates for some of the new Xeons and am confused about when anything will actually happen from Intel, especially with the Cougar Point issues. I'm looking to build myself a workstation for work purposes primarily and would appreciate some timeline (and an idea of budget) on how much longer I'll need to keep doing all my work on a maxed out upgraded Macbook Pro attached to a NAS. Cougar Point is the desktop chipset, so server chipsets should be a-ok. Not sure about the LRDIMMs.
|
# ? Feb 10, 2011 17:52 |
|
General process problems from one business line can ripple into another (eg. server group goes "omg, triple check our poo poo, too!"). Maybe they're affected, maybe not. I'm kind of excited for the SAS controllers getting built into the Xeons. I'll be doing a RAID0 of SSDs, and not relying so much upon the motherboard or an extra add-on card for RAID0 will be a nice change of pace. Now, it does mean I'm choosing vendor lock-in, but I don't think I mind so much here given it's a build-once-and-forget type of machine.
|
# ? Feb 10, 2011 18:55 |
|
necrobobsledder posted:General process problems from one business line can ripple into another (eg. server group goes "omg, triple check our poo poo, too!"). Maybe they're affected whatsoever, maybe not. It's not a process problem, it's a single transistor whose electrical specifications were violated specifically in the P67 and H67. I'm sure the server chipset group (hell, every group at every semi. company) has been triple-checking their poo poo nowadays after their management poo poo bricks at the thought of having do a $1bil recall themselves. quote:I'm kind of excited for the SAS controllers getting built into the Xeons. I'll be doing a RAID0 of SSDs and not relying so much upon the motherboard or an extra add-on card for RAID-0 will be a nice change of pace. Now, it does mean I'm going to choose vendor lockin, but I don't think I mind so much here given it's a build once and forget type of machine. There are SAS controllers built into the new Xeons? Are you sure?
|
# ? Feb 10, 2011 19:03 |
|
movax posted:It's not a process problem, it's a single transistor whose electrical specifications were violated specifically in the P67 and H67. I should have clarified that in this thread of all places as business process, not fabrication or manufacturing process. Not quite sure what it'll take to keep this from happening in the business lines, but with GPUs and memory controllers and god knows what else merging onto dies, who knows what'll happen in the future? movax posted:There are SAS controllers built into the new Xeons? Are you sure? Set for Romley which is supposed to be out this year and has been announced to have SAS controllers. http://www.glgroup.com/News/New-Intel-processor-chips-will-incorporate-SAS-controller-50602.html I'm already biting my nails at the thought of how much LRDIMMs will cost and sulking back to FBDIMMs for my 12GB/16GB setup (depending upon the feature and cost matrix at release). I don't need the LRDIMM features for a stupid workstation, but on the other hand I do need to have the sort of configuration a customer might have in their datacenter when I'm doing some performance analysis for some stuff I'm writing now off the clock.
|
# ? Feb 10, 2011 20:40 |
|
necrobobsledder posted:I should have clarified that in this thread of all places as business process, not fabrication or manufacturing process. Not quite sure what it'll take to keep this from happening in the business lines, but with GPUs and memory controllers and god knows what else merging onto dies, who knows what'll happen in the future? Oh, gotcha. quote:Set for Romley which is supposed to be out this year and has been announced to have SAS controllers. http://www.glgroup.com/News/New-Intel-processor-chips-will-incorporate-SAS-controller-50602.html Ooh, that is pretty cool (the controller is in the PCH, not the CPU, BTW - it wouldn't make much sense to cram a SAS controller onto the CPU die; still guaranteed to be on the mobo though!). I wonder what driver support will be like... quote:I'm already biting my nails at the thought of how much LRDIMMs will cost and sulking back to FBDIMMs for my 12GB/16GB setup (depending upon the feature and cost matrix at release). I don't need the LRDIMM features for a stupid workstation, but on the other hand I do need to have the sort of configuration a customer might have in their datacenter when I'm doing some performance analysis for some stuff I'm writing now off the clock. Knowing pricing for Intel server platforms, users might as well start saving now to get a good price.
|
# ? Feb 10, 2011 21:40 |
|
Well, I've got 4 hard drives, so I am using the affected ports on my P8P67 Pro, but I'm not terribly worried about it. I haven't noticed any problems so far. I got an email from Micro Center about it before I'd even heard about the issue, and it said they'll contact me again and handle my RMA as soon as replacements are available. Sure, it's an inconvenience, but they're fully fixing the issue in what (so far, at least) is a timely manner, so I'm not that fussed about it. I figure that's the risk I take for buying new tech the day it comes out. If you don't want to deal with possible bugs/defects, wait six months - that's my general rule of thumb.
|
# ? Feb 11, 2011 13:55 |
|
Cerri posted:Well, I've got 4 hard drives, so I am using the affected ports on my P8P67 Pro, but I'm not terribly worried about it. I haven't noticed any problems so far. How so? I have 4 hard drives on my P8P67 Pro and I have them all on the unaffected 6Gb/s ports. The P8P67 Pro has 4 x 3Gb/s Intel (affected), 2 x 6Gb/s Intel (unaffected) and 2 x 6Gb/s Marvell (unaffected). The only thing I have on an affected port is my DVD drive.
|
# ? Feb 11, 2011 21:19 |
|
From past experience the Marvell drivers are pretty awful. I much prefer to have my drives on the Intel SATA ports than touch the Marvell ones. I disabled it in the BIOS even.
|
# ? Feb 11, 2011 21:32 |
|
I think the Marvell drivers for Windows 7 x64 are fine, but other than that, they might not be that great.
|
# ? Feb 11, 2011 22:23 |
|
|
I'm sure they're fine for your porn drive but performance isn't exactly encouraging:
|
# ? Feb 11, 2011 23:07 |