Oysters Autobio
Mar 13, 2017
Getting into NAS / home networking for the first time (though I've built a couple of gaming PCs over the years, so I'm somewhat familiar, since the parts are basically the same here).

Here's my setup; I wouldn't mind any suggestions or recommendations. The working / initial goal is a Jellyfin server that I can share with 5+ friends, and then later to explore different self-hosting apps and home automation stuff.

CPU: Intel Core i5-9400 2.9 GHz 6-Core Processor (Used)
CPU Cooler: Cooler Master i71C RGB 37 CFM Rifle Bearing CPU Cooler **
Motherboard: Gigabyte B365M DS3H WIFI Micro ATX LGA1151 Motherboard
Memory: TEAMGROUP T-Force Vulcan Z 16 GB (2 x 8 GB) DDR4-3200 CL16 Memory (Free)
Storage: HP EX900 Plus 1 TB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive (Used)
Storage: Seagate Barracuda Compute 2 TB 3.5" 7200 RPM Internal Hard Drive (Free)
Storage: Western Digital WD_BLACK 4 TB 3.5" 7200 RPM Internal Hard Drive (Free)
Storage: Western Digital WD_BLACK 4 TB 3.5" 7200 RPM Internal Hard Drive (Free)
Case: Thermaltake Core V21 MicroATX Mini Tower Case
Power Supply: EVGA 500 W1 500 W 80+ Certified ATX Power Supply
** - Bought a separate cooler because the used CPU I purchased didn't come with one.


Not sure yet if the RAM will be enough but it was free.

I'm also still debating what OS I want to use, mainly because I'm separately interested in learning Docker and other general Linux-ish dev skills. So I'm torn between going the easy route and setting up Unraid, or going full-bore with something like Proxmox + Ubuntu. I work in data analysis / BI, so I'm not overly technical aside from Python scripts; part of this whole project is to start self-learning some general developer skills like Docker and Linux, and to have a setup for portfolio-building projects.

Bear in mind I've barely touched anything Linux (set up WSL2 on my Windows PC, have to SSH into servers now and again), and as much as I've been trying to lean into the overall benefits and the whole "terminal master race" thing, I'm hesitant to jump headfirst into some super finicky setup with such a steep learning curve that I never even get a media server off the ground.

How easy would it be to set up Unraid for now and then later migrate to a "home server" style virtualized setup?

Oysters Autobio fucked around with this message at 14:33 on Mar 6, 2024


Oysters Autobio
Mar 13, 2017
A friend of mine is offering 2x 4TB drives for free (just paying shipping) for my first NAS build (specs in earlier post). They're Western Digital WD_BLACK 4 TB 3.5" 7200 RPM drives.

Here's the thing: he tested them for bad sectors and they turned out fine, but they are still 10-year-old hard drives. He's shipping them over, which will cost me about $30 CAD.

Would it be worth it for my first build or does the age factor make it a bit dicey?
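
For anyone curious, here's roughly how I'm planning to sanity-check them when they arrive (a minimal sketch, assuming smartmontools 7+ for JSON output and root access; the /dev/sdX paths are just placeholders):

```python
#!/usr/bin/env python3
"""Rough SMART sanity check for second-hand drives."""
import json
import subprocess

DRIVES = ["/dev/sdb", "/dev/sdc"]        # placeholder device paths for the two WD Blacks
WATCH = {5: "Reallocated_Sector_Ct",     # SMART attribute IDs worth eyeballing
         187: "Reported_Uncorrect",
         197: "Current_Pending_Sector",
         198: "Offline_Uncorrectable"}

for dev in DRIVES:
    # smartctl uses nonzero exit codes as status flags, so don't use check=True
    out = subprocess.run(["smartctl", "-a", "-j", dev],
                         capture_output=True, text=True).stdout
    data = json.loads(out)
    hours = data.get("power_on_time", {}).get("hours", "?")
    print(f"{dev}: {hours} power-on hours")
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["id"] in WATCH:
            print(f"  {attr['name']}: raw={attr['raw']['value']}")
```

Anything nonzero in the reallocated/pending sector counts, or a silly power-on hour figure, and I'll probably pass even at free-plus-shipping.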

Oysters Autobio
Mar 13, 2017

Oysters Autobio posted:

How easy would it be to set up Unraid for now and then later migrate to a "home server" style virtualized setup?

Sorry to self-quote but just wanted to follow up here on my question:

If I have any inkling of running a Linux OS on the machine for general homelab use, should I just put in the work now and set up TrueNAS under Proxmox? Or can I go straight TrueNAS on bare metal for now and virtualize it down the road if I really want to set up a Linux dev box? If this is a better question for the homelab thread let me know, since I know it basically straddles both.

(I mentioned unraid in my first post but have since decided to go with TrueNAS.)

Right now my only immediate project is a Jellyfin server, but I do want to eventually run other stuff like Home Assistant or what-not. I've got all the parts ready to assemble, and right now it makes sense to just go with TrueNAS, but I also don't want to pigeonhole myself. How difficult would it be to migrate to a virtualized setup if I decided I wanted to do that? I know TrueNAS can launch apps and Docker containers itself, but a big part of this home project is also to learn Docker/dev skills for work and such.
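
As a concrete example of the kind of thing I want to practice, here's a minimal sketch of spinning up Jellyfin with the Docker SDK for Python (the image is the official jellyfin/jellyfin one; the host paths are just placeholders for wherever my config and media end up living):

```python
# pip install docker  -- talks to the local Docker daemon
import docker

client = docker.from_env()

container = client.containers.run(
    "jellyfin/jellyfin",                      # official Jellyfin image
    name="jellyfin",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    ports={"8096/tcp": 8096},                 # web UI
    volumes={
        "/srv/jellyfin/config": {"bind": "/config", "mode": "rw"},   # placeholder path
        "/srv/media": {"bind": "/media", "mode": "ro"},              # placeholder path
    },
)
print(container.name, container.status)
```

I know most people would just write a compose file for this, but doing it through the SDK is sort of the point, since the Python practice is half of why I'm building this.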

Oysters Autobio
Mar 13, 2017

THF13 posted:

If you're using an HBA card it should be pretty easy. You can pass the card through and TrueNAS will see the drives directly, so you could import your bare metal TrueNAS array into a new virtualized TrueNAS pretty simply.
I ran my recent Unraid build on bare metal for a while to make sure it was all stable, then switched to Proxmox this way. Since I was using an NVMe drive for the Unraid cache/appdata, and NVMe drives are just PCIe devices, I was able to pass that through directly as well. I could switch between booting Unraid on bare metal and as a Proxmox VM with zero configuration changes, just by choosing the boot device during startup.

If you have SATA drives connected directly to the motherboard I think this gets more annoying, as passing those through with Proxmox will, I believe, make them appear as a different drive.

I was considering an HBA card anyway, and if this sort of setup makes it that easy then I probably will get one.

These cards are completely new hardware territory for me. What do I need to check on my build for compatibility and/or performance when picking an HBA card? I'm seeing a decently priced Adaptec ASR-71605 6Gb/s SAS/SATA PCIe RAID card but have zero clue how to spec these against my mobo and case form factor.

Oysters Autobio
Mar 13, 2017
Hmm, OK, I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any firmware flashing, which I'm not keen on because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on, so it'd be nice if I don't have to redo anything when I get the HBA, unless I'm misunderstanding something about them.


Oysters Autobio
Mar 13, 2017

THF13 posted:

I was told to get LSI cards and avoid Adaptec ones, which was advice I followed, though I don't really know the reasons behind it.

You'll want one that's capable of being flashed to IT mode, but you'll probably find sellers advertising that they've pre-flashed it, so you won't need to do it yourself.

Different cards will use different numbers of PCIe lanes. I think you can use an x8 card in an x4 slot, but I'm unsure if that will only affect total bandwidth (the combined max speed of all drives limited by the number of lanes and PCIe version), or if it will cause errors or weirdness if you approach that speed.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. Depending on the HBA the type of connector is different, so make sure you get the right one.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 version, and a 93## will be gen 3. Gen 2 is still probably fine for spinning drives and those cards are extremely cheap, but gen 3 ones have come down in price recently.
After the 9### model number they will have another number and a letter: the number is how many drives it supports, and the letter tells you whether the connections are internal or external.
So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to.
Don't assume PCIe lane requirements from the number of drive connections; look up the specific model's details.

You can use more drives than an HBA card supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and act as the hard drive equivalent of a network switch.

EDIT:
As a final note, some HBAs were designed for servers with the expectation that there would be airflow over them and can overheat, so a common recommendation is to zip-tie a tiny fan onto the heatsink. Some don't need this, but I don't know of anywhere you could check.

I decided to go with the Adaptec, mainly because of reviews on them like here. Snagged a used one off eBay for around $35 USD.

Seems like the only downside I can see with the Adaptec is very much needing to attach a 40mm fan to prevent overheating. But looking at other options, it's a really decent price point for something that supports up to 16 HDDs, is on PCIe 3.0, and doesn't require re-flashing when switching between IR/IT modes. Someone in that thread also showed an easy way to use two unused threaded holes on the heatsink to mount a small 40mm fan, so I like that over any kind of zip ties or anything.
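
For my own peace of mind on THF13's point about PCIe lanes, here's a rough back-of-the-envelope bandwidth check (the per-lane figures are the usual approximate PCIe 2.0/3.0 numbers, and the ~250 MB/s per spinning drive is just my assumption for sequential reads):

```python
# Rough sanity check: will the HBA's PCIe link bottleneck the drives?
PCIE_MB_PER_LANE = {"2.0": 500, "3.0": 985}   # approx. usable MB/s per lane
DRIVE_MB_S = 250                              # assumed sequential MB/s per spinning drive

def link_headroom(gen: str, lanes: int, drives: int) -> float:
    """Ratio of link bandwidth to combined drive throughput (>1 means headroom)."""
    link = PCIE_MB_PER_LANE[gen] * lanes
    return link / (DRIVE_MB_S * drives)

# The ASR-71605 is a PCIe 3.0 card (x8, I believe); compare a few scenarios.
for gen, lanes, drives in [("3.0", 8, 4), ("3.0", 8, 16), ("2.0", 4, 8)]:
    print(f"PCIe {gen} x{lanes}, {drives} drives: "
          f"{link_headroom(gen, lanes, drives):.1f}x headroom")
```

So even fully loaded with 16 spinning drives it has roughly 2x headroom, and it only gets marginal if a card ends up on an older gen 2 x4 link.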

Oysters Autobio fucked around with this message at 04:19 on Mar 25, 2024
