Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, seconded - I have been running three of those drives as a software RAID 5 in my home server for the past month or so and they've been flawless. Shucking all three was pretty quick with a set of guitar picks to pry the tabs open. The drives inside were completely standard 8TB Reds, same label as the bare version.

Even in a crowded microATX case with only one slow exhaust fan they stayed under 50C, and now that I've added an intake at the front they top out around 42.
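If anyone wants to see how little setup the software RAID actually took, here's roughly the mdadm side of it - the device names, array name, and mount point below are just examples, so check lsblk for your own drive letters first:

code:

# create a 3-disk RAID 5 out of the shucked drives (device names are examples)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# watch the initial sync, then put a filesystem on it and mount it
cat /proc/mdstat
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/array
sudo mount /dev/md0 /mnt/array
# save the array definition so it assembles on boot (CentOS config path)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf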


Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I have a Dell T20 with three of the 8TB drives added about a month ago (at least I think they're actually HGST - they're WD Reds stripped out of the $180 externals from Best Buy) running in a RAID 5, along with a Seagate 2TB 3.5" that I use standalone. The 2TB is in the top bay with an 8TB right below it and the other 2x8TB in the bracket on the bottom of the case.

They were running a bit too hot for my comfort with just the stock exhaust fan, so I added a 92mm intake fan right behind the metal front wall with an adapter to take it down to 5V and here's what I'm seeing right now after an hour straight of writes:

[<user>@<server> ~]$ hddtemp
/dev/sda: WDC WD80EFAX-68LHPN0: 48°C
/dev/sdb: WDC WD80EFAX-68LHPN0: 45°C
/dev/sdc: WDC WD80EFAX-68LHPN0: 41°C
/dev/sdd: CT240BX200SSD1: 28°C
/dev/sde: ST2000DL001-9VT156: 39°C

Still warmer than I'd like, although not seriously concerning. I plan to get a 120mm slim fan that can fit behind the plastic front panel and see if that improves results over the 92mm one.

Eletriarnation fucked around with this message at 16:49 on Nov 5, 2017

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Thermopyle posted:

Windows Home Server did this.

Storage Spaces in Windows 10 still does this, if I recall correctly - you designate the drives as being used for Storage Spaces, then just carve out whatever pattern of mirrored/striped/parity logical arrays you want with the space available. I used it in my desktop for my previous 3x3TB RAID 5 for several months until I decided to migrate to my always-on Linux box and mdadm.

Eletriarnation fucked around with this message at 16:54 on Nov 5, 2017

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Oh hey, I have that exact computer with an E8600 as my HTPC.

You can't use that proc in a remotely new machine. You could try to find a used S775 motherboard and build a system around it, but realistically you'd be better off with something like a mini-ITX board with an embedded Celeron for several reasons: reliability, power consumption, modern motherboard features, and the fact that you wouldn't actually save much money building a new system around an ancient processor when all the other parts cost the same... you get the idea.

You could also get a USB3 card or eSATA adapter for that Dell and an external multi-drive enclosure to plug in to the faster interface, but you'll pay a substantial portion of the cost of a new system for that. If I recall correctly, the PCIe x1 slot is directly under the x16 slot as well so if you're using an add-in GPU instead of onboard video it may be blocked.

e: really though, that thing uses like 65W at least idling. If you leave it on all day and you're paying $0.10/kWh, it's going to cost you an extra $50 a year. An embedded Celeron motherboard with 4GB RAM would cost like $100 and use less than half as much power. I considered replacing mine with the J3455 NUC for that reason but I only leave it running when watching something on it, so it was less compelling.

Eletriarnation fucked around with this message at 06:56 on Nov 6, 2017

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
You gotta be the change you want to see, not snipe from another thread. poo poo posts come from poo poo posters and not the thread aether. If you get sick of posting back at them you can always vote 1 and move on. :shrug:

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Has anyone here experimented with using a high-performance single board computer with a multiple drive USB 3.0-SATA enclosure as a NAS? It seems like 4-drive enclosures with fans are available around $100 from multiple brands and something like a $65 Odroid-XU4 would be more than capable of running mdadm RAID 5 calculations while using a lot less power than a full x86 desktop, but I'm not sure if there would be any hidden bottlenecks from using USB or reliability concerns with the enclosure.
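One thing I'd check on any enclosure before committing to it is whether the bridge chip runs in UAS mode rather than plain usb-storage, since that's the usual hidden USB bottleneck. A quick way to see which driver the kernel picked, plus a per-disk sanity check (sdX is a placeholder):

code:

# show the USB tree with the driver bound to each device
lsusb -t
# "Driver=uas" is the good case; "Driver=usb-storage" is the old, slower BOT mode
# rough sequential read check on one of the enclosure's disks
sudo hdparm -t /dev/sdX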

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, those are both interesting. I'd definitely want to have support for at least 4 3.5" drives but the Helios looks almost exactly like what I have in mind, basically something that would allow me to play around with different OSes and configurations more on my x86 server without having downtime on my network storage.

I also noticed this 6-drive unit which has some nice features but I'm concerned that the router SoC running it wouldn't be quick enough to keep up with software RAID 5. The description even mentions RAID 0 and 1 specifically but no others.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Careful Drums posted:

It's a gaming rig from ~2008 with 8GB of RAM and a Core 2 Duo. I'd remove the video card, get a new power supply, and very likely a new mobo since it's been collecting dust for a decade now. I want a storage solution for movies and music and to keep all my family's 'data' in one place like all the financial records, dashcam recordings, etc. I'd also like to do fun stuff like run my own git server instead of relying on github. For now all this stuff just sits on a hard disk on my current gaming rig but I'm constantly afraid of hard disk failure.


e: and the gobs and gobs of photos of our kids that are sitting out in iCloud and dropbox

So... I had a Raspberry Pi as my NAS/torrent box for a long time, and like everyone else said it works but sucks because I/O is all through one USB2.0 bus. If I were to try something like that with a single-board computer again I'd use a fuller-featured one like this.

However, wanting to also add RAID I switched to a Haswell E3 Xeon which removed all of the performance bottlenecks except the disks themselves and worked great. After setting up Plex, VNC, Samba, mdadm, etc. though I decided that I wanted to try some different distros and other configuration changes to play around with it more as a virtualization host but didn't want to lose all of the services I already set up.

I ended up migrating all of the services to a Core 2 Quad with 4GB of RAM in an old Dell desktop, and that works just as well except for using more power. If your old gaming machine seems to be working OK, you could give it a try - a good quality motherboard might still have plenty of life left after a decade if it hasn't been mistreated and that goes double for the processor. Running things like a fileserver or Git doesn't take much processing power or RAM at all and even a C2D should still have no problems. I'd have set it up with my old AGP/Pentium M board just for kicks but I'd have to add a discrete GPU and power consumption would have been terrible.

Eletriarnation fucked around with this message at 22:38 on Feb 26, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Careful Drums posted:

Any idea how expensive power consumption gets with this approach?

The Haswell Xeon system, a Poweredge T20, idles around 30W with just the SSD system disk going and with all 4 3.5" drives spinning uses ~65W. This is with the stock Dell PSU, which doesn't have an obvious 80+ rating label but is probably pretty efficient since it's recent and 5V/12V only. The C2Q system uses around 15W more in both cases with a Seasonic 80+ Platinum PSU. I also tried my old Nehalem gaming rig with the same Seasonic back before getting the T20, but even undervolted to 0.9V it idled at 70W - I assume due to a more power-hungry memory controller/chipset and the discrete graphics card I had to use with it. All these are measured at the wall outlet with a Kill-a-Watt. I don't ever do anything that really pushes the CPU hard, so I can't say what the numbers look like going all out but you could probably just add 80% of the TDP and not be too far off.

At $0.10/kWh, 65W constant costs $0.156 for one day or ~$57/year, so you can extrapolate from there based on your situation.
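The math is just watts x hours x rate if you want to plug in your own numbers - something like this, with the wattage and price as the things to change:

code:

# yearly cost of a constant load: watts * 24 * 365 / 1000 * $/kWh
echo "65 * 24 * 365 / 1000 * 0.10" | bc -l
# prints ~56.94, i.e. about $57/year for 65W at $0.10/kWh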

Eletriarnation fucked around with this message at 00:56 on Feb 28, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Eh, that was a few years back - it's probably 11 and change now. Not worth it to be drinking Duke Energy's coal ash anyway.

This may be a really rare issue and also may be more relevant to the Linux thread, but I figured I'd mention it in case anyone else is considering using Samba and NFS on the same share. If you follow the RHCSA cert guide for this stuff like me and you set Samba up first, you might not realize that a file can only carry one SELinux context at a time, so the Samba and NFS labels can't coexist on the same share. When the guide tells you to use semanage fcontext to set the nfs_t context, it should probably also include this warning from the official RHEL documentation:

quote:

Depending on policy configuration, services, such as Apache HTTP Server and MariaDB, may not be able to read files labeled with the nfs_t type. This may prevent file systems labeled with this type from being mounted and then read or exported by other services.

What this means in practice: if you have a perfectly functional Samba share and you set it all to nfs_t, you will break it. smbd will core dump with lots of errors (although it seems to recover gracefully), and a Windows client will get authentication errors instead of connecting normally.
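If anyone hits this and wants to put things back, the recovery is roughly the following - the path is an example, and if you genuinely need the same directory exported by both Samba and NFS the RHEL docs point toward the shared public_content types rather than nfs_t, though I haven't tried that route myself:

code:

# check what label the share actually carries right now
ls -Zd /srv/share
# drop the nfs_t rule added earlier, then relabel back to samba_share_t
sudo semanage fcontext -d "/srv/share(/.*)?"
sudo semanage fcontext -a -t samba_share_t "/srv/share(/.*)?"
sudo restorecon -Rv /srv/share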

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
That might come down to how long the issue takes to detect - the subsequent generations could already have been planned out before anyone figured out what mistake had been made.

I mean, imagine if you had a bug where a certain counter was overflowing instead of resetting properly but this counter ticks up slowly so it takes several months to hit maximum. I've seen a couple like this, so it's not purely a hypothetical. If no one detected the bug in internal reviews and it got released, you could very well have multiple releases out with it before a customer saw the overflow and raised a ticket.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Oh, definitely - I'm not trying to comment on Intel's response to the issue, just explaining how the bug might have made it into subsequent generations before being noticed in the first.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Takes No Damage posted:

So I've been sitting around with 3 95% full HDs for like 2 years and we all know it's just a matter of time before one junks out and I lose a bunch of poo poo. I do have most of my favorite things backed up to a cloud service but I'd like to have something more local that I can treat like a giant HD and just FTP all my big files to, and I'd like some redundancy so it could take at least a single drive failure.

Am I right in thinking my best bang-for-buck choice is to just build a tiny PC with a bunch of HD slots and run FreeNAS from a USB/small internal drive? Late last year there were some people throwing around system parts lists, are any of those still current?

In my ideal scenario I'd be able to start with a pair of drives and just casually add more as I need them. I'll also most likely be running my Plex server off of it but that's currently running from my 4yr old Lubuntu desktop so my streaming needs are pretty light as it is. Would there be any issue backing stuff up from both Windows and Linux machines? Recommended/ideal RAID flavor to use?

Does this still sound like a decent solution for me, and if so is there a babby's first NAS parts list anyone can point me to?

If you're talking about bang for buck, here's your semi-comedy ultra budget refurb option.
CPU: Xeon X3440 - $15
CPU cooler: Intel stock - $9
Motherboard: Supermicro X8SIL-F - $40 (the non-F version is cheaper but this has out of band management)
RAM: 2x 4GB Hynix DDR3-1866 ECC UDIMM - $44 (will need to run at 1333 but that's OK, this stuff has timings all the way down to 800)
Case: Fractal Design Node 804 - $106
PSU: whatever you feel like paying for
Total: $214 before PSU and drives

Obviously you could get a cheaper case and go even lower, but how many non-gargantuan cases come with 10 3.5" bays? Even if you need to add a SATA/SAS controller to use them all, you're set for a few years down the road when your first RAID fills up and you have to add a second one.

Eletriarnation fucked around with this message at 21:32 on Jun 15, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
The X8SI6 is fine too, just note that it's full ATX and you'll need to pick a different case from the Node 804 for it.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
drat, Paul beat me to it but the J4105 version is very nearly as fast and $30 cheaper. There are also mATX versions of each if you need multiple add-in cards and want to trade SATA ports for PCIe slots.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
If you can find an L3110 anywhere, it's the only low power dual core they ever made for LGA775. I got one for $13 a while back on eBay but don't see any comparable prices right now.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
As long as it's 3.0 or newer you're not going to see much difference between USB and eSATA - the drive itself should be the limiting factor.

e: Specifically, USB 3.0 is 5Gbps and SATA3 is 6Gbps - I think SATA has less overhead, but most hard drives are going to struggle to break SATA2 speeds with any kind of sustained transfer anyway.
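If you want to confirm the drive rather than the interface is the ceiling, a quick sequential read over each connection will show it - sdX is a placeholder, and this only measures big sequential reads, not small random I/O:

code:

# buffered sequential read speed; run once over USB and once over eSATA
sudo hdparm -t /dev/sdX
# or a bigger direct read that bypasses the page cache
sudo dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct status=progress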

Eletriarnation fucked around with this message at 00:40 on Jun 17, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I'm not sure if you will run into odd issues using Blue drives in an array or not; in the past, some consumer drives had issues with slow response time causing them to drop out of arrays. Someone else might know more about how recent models work. You could get the Red drives which are intended for arrays and 24/7 uptime or Seagate's equivalent (Ironwolf, lol) but they do cost more.

Also, although it will be more expensive you will use less power and have less chance of losing the array to a multi-drive failure if you use fewer and larger drives. I was able to put together a relatively inexpensive 3x8TB array by shucking WD externals from Best Buy (a particular model has Reds on the inside), but even if you want to stick to internal drives you could get 4x3TB or 3x4TB for not much more.

Finally, I don't know if it makes a big difference to use FreeNAS from an SSD or if you could get by with a flash drive. The X8SIL even has an internal Type-A port for a recovery/system USB drive.

Eletriarnation fucked around with this message at 17:38 on Jun 19, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I don't have any ideas to add, but I feel like that should work - I have an X8SIL-F which is the same chipset and its SATA ports work fine for boot disks. I also wonder if there's some screwed-up BIOS setting or something plugged in that's messing with it.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
If you want a couple of drive bays and enough processing power to do trans/encoding for cheap, you can pay about the same amount for a microserver as you would for a small NAS. Dell sells a refurb Poweredge T30 with a Skylake E3 Xeon for ~$300 after coupon right now, and I've seen similarly-priced deals for new ones in the past. The only real hitch is that the cases are a little cramped and have proprietary PSUs, but there are enough SATA connections to fill every bay and long term you can get a PSU adapter if you need to replace it.

You can also go back a few generations like I did and get an X8SIL-F, a Xeon X3440, and 8GB of ECC DDR3 for under $100 from eBay, then add your own case and PSU. Power consumption is a little worse, but not enough to make the cost difference back inside of a decade; I see around 45-50W when idling with 4 3.5" disks spinning, and going just from their spec sheet the disks are probably 2/3 of that.

Eletriarnation fucked around with this message at 01:37 on Aug 25, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I'm not sure if I would choose a motherboard solely on the basis of native SATA ports, since there are cheap used 12+ port SAS controllers out there. If you really want a lot of drives you can get one of those and go right on up as long as you have a spare x8 or x16 slot for it.

To use an accessible example, the X8SIL I mentioned in my last post as a low-cost but full-featured used mobo has 6 native ports (and they're only SATA2). However, with three x8 slots I'm sure that I will run out of drive bays in any case I would care to use before I have issues with being able to add ports to it.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
It's hard to beat the Node 804's ten 3.5" bays at the same cost or size, let alone both. I'm using an old full tower case that I don't want to throw away or use anywhere else, but I bought a microATX board for it partly so I'd have the option to use an 804 if I ever want to add more drives than would fit right now.

As far as the PSU goes, keep in mind that your processor will be essentially idling doing NAS work. Even with nine drives, if you have a recent i3 then you won't ever go much above 150W and will probably be closer to 100 most of the time, so if you're going to spend for anything, spend for efficiency. I use a 400W Seasonic Platinum-rated fanless model; if I wanted to pick out something cheaper, I'd probably just go for a similar-capacity 80+ plain or bronze model.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I think the secret is that you can remove the 5.25" slim optical drive and replace it with two 2.5" drives, plus mount two 3.5" drives up top in addition to the two below for a total of six. At least, I have the T20 and it works like that if I recall correctly.

It's a great box for the price if you just want to host VMs or whatever, but the proprietary power supply and mediocre airflow make it not the best NAS in my experience unless you plan to have <=2 3.5" drives. I had to wedge in an additional fan (there are no additional fan mounts) to keep my RAID5 cool and I only have the exact number of SATA power connectors needed to have one for every drive bay, so there's not much choice in how the drives are cabled.

Eletriarnation fucked around with this message at 21:00 on Sep 20, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yup, looks identical to the T20.

That third bay on top made of sheet metal doesn't totally count, because the front panel cable is running through that space and I wasn't able to actually fit a regular drive far enough in to secure it well as a result. I used it with a 2.5"->3.5" adapter to mount an SSD. The two above it with the plastic sleds work fine, but if both are filled and constantly spinning they will run pretty hot without additional airflow.

Adapters are out there to connect ATX power supplies to that nonstandard 8-pin, but it is an additional thing to consider for anyone who might be thinking of connecting a large GPU since there are no PCIe power connectors on the stock 280W supply. It said not to put more than 25W in the top PCIe slot but I think that's probably not a real concern, I had no issues running an RX 460 in it.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, that's more than enough; I run a CentOS box with Plex/Samba/torrents going on a first gen i7 Xeon (X3440) with 8GB RAM and it basically idles with 7GB available unless it's transcoding something or I'm using VNC.

Eletriarnation fucked around with this message at 17:52 on Oct 15, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

H110Hawk posted:

What? Ram and cpus draw way more peak voltage than 2 spinning disks. As you said 20w for both disks vs easily 100w for the cpu+ ram. I don't think they have poor life choices in 16gb of ram.

Your 100W there breaks down to 95W for the CPU and 5W for the RAM; it's pretty safe to just worry about the former, as the spinning disks actually will draw more power than the RAM.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

H110Hawk posted:

Look at me I am wrong on the internet. I was remembering back to anecdotal evidence from 5? Years ago now where upgrading ram in a rack of servers caused them to blow breakers. We had added amps to the rack by doubling the dimm count. Guess it was a red herring or voltages/types have dropped dramatically in wattage. Could have also been that they were able to work harder so their cpus drew more power?

It's an understandable mixup considering that servers have a lot more RAM, RDIMMs (or FBDIMMs) use a lot more power, and if your old servers were old enough to use DDR2 or DDR1 then between higher current and higher voltage your wattage goes up substantially from that too.

Standard DDR3-1600 @ 1.5V is about 2.5-3W for an 8GB DIMM and DDR4 is going to be less though, from what I am seeing.

e:

necrobobsledder posted:

Google folks have been saying they’re running into electrical code issues where they can’t just add more racks, it’s not that power cost or cooling itself is the issue - they’ve hit a wall of bureaucracy / code which explains the expansion into alternative power beyond the liberal brownie points.

I work for a networking vendor with lots of labs full of 10kW+ boxes and was told several years ago when I started that at this location we were basically drawing as much power into our buildings (at least the ones which have large labs) as the local power company would allow. I cannot imagine this situation has improved much, considering how power density has increased per-RU.

The labs also run into occasional infrastructure issues with how much power they can deliver to a given area because they were designed several years ago around a lot less average draw per rack. I've tripped a breaker before by rebooting two full-rack chassis at once and causing all eight fan trays to spin up to full speed at the same time.

Eletriarnation fucked around with this message at 18:06 on Nov 13, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I use CentOS + samba + mdadm RAID 5. Works great.
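For anyone wondering how much configuration that actually takes, the Samba side is basically one share block in /etc/samba/smb.conf - the share name, path, and user here are just examples:

code:

[array]
    path = /mnt/array
    valid users = myuser
    read only = no

After that it's just adding the user with smbpasswd -a and enabling the smb service, and the mdadm side is a handful of commands on top of that.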

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

pgroce posted:

Getting around to replacing the motherboard in my FreeNAS server since the last one seems to have died. My existing drives are SATA, and I'd like the headroom to run some Docker images (home network services, specifically Emby and Logitech Media Server). My price point is <$1000, ideally a lot less.

Any recommendations?

My NAS runs CentOS on an X8SIL-F with X3440, a combo which would have no problem with Docker etc. and is closer to $50 than $100; even with 2x4GB of ECC DDR3 added in you're looking at under $100 total. What kind of motherboard are you replacing that you're thinking about $1000 as a reference point? I assume there's a particular processor that still works and you intend to use it again, which will definitely be important to consider.

Eletriarnation fucked around with this message at 19:14 on Dec 31, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

pgroce posted:

Sorry, I wasn't clear. The motherboard is what died, but I'm planning to replace the MB, CPU and RAM. That's probably still way under $1k, but budget-wise that's what I can spare. I'm really just looking for sweet-spot recommendations on either CPU/motherboard combos or SoC boards with lots of SATA ports and enough CPU/RAM for running some Docker images of media apps. (I didn't mention in my first post, but I'd like to transcode on the server.)

I'd normally just check the OP, but it's old, and I couldn't adapt anything from the last few pages.

I don't remember what's in the current system, probably an i3 or i5 from years back. I'll crack it open and look if it makes a difference, but the only thing I care about keeping are the drives.

Should I just pick up a low-end Xeon and a needs-suiting motherboard then? Any particular recommendations? Thanks for any help!

The X3440 I mentioned is basically just a slow 1st-gen i7 (same feature set+ECC, but the i7-860 is 533MHz faster) and I consider it to be kind of a sweet spot, since it's already about as cheap as you can get and Core 2 is a big step down in multiple ways. Power consumption is fairly low; mine uses 50-55W with four 3.5" disks spinning, and probably less than 30 with no disks.

32GB memory is easily done on the X8SIL using registered memory if you want, though I've been fine with 8 since I'm hosting VMs elsewhere. There are 6 SATA ports onboard and though they are only SATA 2 you'd be unlikely to notice a difference with HDDs. Dual Intel Gigabit Ethernet, too. Anything you need to add to that can probably go in the three PCIe x8 slots. There are other boards from Supermicro in this series with other layouts, but I liked this one for the size and three PCIe slots.

Newer platforms do have performance/power consumption improvements but only incremental unless you're getting into much more expensive product ranges, and my typical CPU load is only around 5% anyway. I'd only really look at something newer if I needed strong CPU performance for something like running professional applications in a VM (or VT-d GPU passthrough for games) or if I needed really fast I/O, more than just 10GbE/SATA3 which I could get with the PCIe slots. Hardware transcoding is also a thing to consider but so far Plex has been able to keep up without it for me.

Eletriarnation fucked around with this message at 13:36 on Jan 1, 2019

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Other than the slim possibility that the replacement is DoA and itself needs to be replaced before the warranty period runs out, I don't think there's any risk in leaving it in the box for a year or two.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

DrDork posted:

You absolutely can! A 2500k isn't going to be the most power-efficient chip you can throw at it

This is true with stock settings, but if you're going to go through with this plan I'd check UEFI to see if undervolting is possible. My cheap Z68 board doesn't allow it so I don't know about the 2500K, but my older X58 board does. I was able to get a "130W" TDP i7-920 which usually ran around 90-100 under load at stock to more like 25-30 at 0.95V and around 2GHz.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I regularly get 110+ MBps on a gigabit link using Samba, so the protocol overhead is not nearly that big of a deal.
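If someone is seeing a lot less than that and wants to figure out whether the network or the fileserver is the problem, iperf3 takes the disks and SMB out of the picture entirely - the hostname is a placeholder:

code:

# on the NAS
iperf3 -s
# on the client - a healthy gigabit link should show ~940 Mbit/s
iperf3 -c nas.local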

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I don't know Athena but from what I recall FSP is reliable, and 80+ Platinum is much better than Silver.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I don't have a Synology NAS but I use BitTorrent Sync's free tier with my Windows machines, Android phones and CentOS NAS and it appears to have a Synology app as well.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Bob Morales posted:

Would it be insane to expand a Linux file server using a Synology that’s mounted over the network? (Perhaps even directly connected)

The application running on it isn't too intense but it can only work with one file source at a time. It's licensed per server so we can't really just move it without paying $$$ and we can't tell it to store poo poo on a separate server.

6-8 drive unit running raid 10? Access it using what technology?

Doesn't seem crazy to me. I'd probably default to using NFS if that's an option for you, since it seems a bit less fiddly than Samba if you're not dealing with Windows hosts.
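For reference, the NFS mount on the Linux side would look something like this - hostname, export path, and mount point are all placeholders and the options are just a plain starting point, not anything tuned:

code:

# one-off test mount
sudo mount -t nfs synology:/volume1/appdata /mnt/appdata
# or make it permanent in /etc/fstab:
# synology:/volume1/appdata  /mnt/appdata  nfs  defaults,_netdev  0 0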

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

WWWWWWWWWWWWWWWWWW posted:

Thank you! I only found ONE on all of eBay that is single-slot, and of course it's some weird-looking one that has zero branding on it whatsoever:

https://www.ebay.com/itm/NVIDIA-Geoforce-GTX-1050-TI-4GB-GDDR-PCIE-X16-NT3709/232943590864

I mean I've never seen a video card with no brand and no mention anywhere on the card of what the model number is. Does Aliexpress make fake video cards now or something or should I be OK ordering this one? My dumb rear end motherboard has a PCI-E mini slot right under the graphics card slot so I'm limited to exactly this card I linked if I want a 1050

If you search the model number beginning with CN-0DVP... on one of the stickers, you'll find a lot of hits of similar looking cards. It seems to be an OEM card, which makes the lack of branding make a lot more sense - I had a similarly featureless OEM GT640.

It looks like the cooler is higher than single-slot and lines up with the data pins in the PCIe connector, so this still probably won't fit with an x1 in the next slot unless you replace the cooler.

Eletriarnation fucked around with this message at 20:46 on Mar 5, 2019

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

WWWWWWWWWWWWWWWWWW posted:

This is also making me wonder if I should not just make this a gaming PC at this point. It's a Windows 10 machine, with 20 hard drives (13 internal, 7 external), an admittedly old 930 i7 but it's overclocked to 4.1ghz and is about to get a decent video card and 16gb of RAM. Now if all this thing is doing all day is using Sonarr, qbittorrent, and most importantly Plex, is there any reason I can't also use it for games at the same time? The days of me being able to sit in front of a gaming PC are over, but I have NVidia Shield TVs all over the house and Limelight worked great when I tried it years ago. Or is this going to be too much stuff for the PC to do at once? I am not expecting to play AAA 2019 releases with graphics set to high, but streaming Dead Rising 2 to my living room would be pretty awesome.


If you're going to overclock it then you might as well game on it, since that's going to completely blow up your power consumption. If not, try going down to minimum multiplier and see what kind of voltage you can get it stable at. I have an i7-920 that ran great at something like 0.95V IIRC when I clocked it at 2Ghz, and that took power down substantially (~25-30W under load for the CPU alone) from stock settings.

If you are going to game on it, depending on whether the board will support it you might want to try a Westmere Xeon. Dirt cheap on eBay at this point, with 6 cores and 32nm for potentially Sandy Bridge-like overclocks. I use a Gigabyte EX58-DS4 which doesn't officially support Westmere, but it does work in the final BIOS update and I've been running an X5660 @ 4.56 (190x24) for several months now. 4.8 was superficially stable without HT, but used nearly 200W and caused frequent crashes to desktop in some games. It's fine for 1080p and light 4K stuff with a 1060.

Eletriarnation fucked around with this message at 20:22 on Mar 6, 2019

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

WWWWWWWWWWWWWWWWWW posted:

Should this be done on the 1050 as well?


The motherboard is an ECS x58b-a2 v1.0. Is there any way to find out if it will take the Westmere Xeon? Their website does not mention it anywhere at all. And why is that processor cheaper than my 930? :wtf:

Also is the electricity increase that noticeable when overclocking? Are we talking like 11 cents a month or like $7?

I'm not sure about Westmere working with that board, since there are only two BIOS revisions listed on the website and while the newer lists compatibility with 6-core Gulftown it is a bit older and explicitly a consumer chip. Mine at least listed compatibility with Nehalem Xeons and some low-end Westmere ones, which caused me to think "well, if it has microcode for some of the generation why not all?"

I think I pay around $0.11/kWh, and that conveniently works out to 1W costing about $1 a year. I feel like I saved somewhere around 15-20W on idle power underclocking my i7-920, so it's more like a buck or two per month if your electricity is as cheap as mine.

After that I tried a Xeon L5520 which starts at "60W" and undervolted it still further, but I was only able to get total system power consumption on X58 down to around 70W at idle. That included fans, a single HDD and a low-end graphics card since there's no IGP, but even counting all that out the CPU+chipset+DRAM still had to be around 40-45W. I've since moved to a Xeon X3440 in a Supermicro board with low-end onboard video and now I use about 50-55W with four HDDs instead of one, so platform power is probably around 15W. Something even newer like my Poweredge T20's E3-1225v3 would make even more of a difference, but it's getting into diminishing returns at the point where the HDDs themselves are >2/3 of total power consumption.

Eletriarnation fucked around with this message at 18:51 on Mar 7, 2019


Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
If I recall correctly, it is actually possible to install a desktop environment in WSL and connect to it using VNC - it's just not there by default.

For transferring files into WSL, check /mnt. You should be able to see your host system's drives there listed by letter and accessible for copying files to/from your Linux filesystem.
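Something along these lines should do it on an Ubuntu-based WSL install, though the package names vary by distro and I haven't kept up with whether newer WSL builds changed any of it:

code:

# inside WSL: lightweight desktop plus a VNC server (Ubuntu package names)
sudo apt install xfce4 tigervnc-standalone-server
vncserver :1
# then point a VNC client on the Windows side at localhost:5901
# and copying files in from the Windows C: drive is just
cp /mnt/c/Users/<you>/Downloads/somefile ~/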

  • Reply