Enos Cabell
Nov 3, 2004


Took 3 days to clear and build parity, but got a few more 12TB drives added to Unraid now. One is being used as a second parity drive, and the other was added to the array for 72TB of total storage. I've got the physical space for 4-6 more drives, but I think I'm at the point now where future upgrades will be swapping out 8TB drives for 12TB drives.
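
For a sense of where the 3 days went, a rough back-of-envelope in Python (the sustained rate is an assumption for illustration, not a measurement from this box):

code:
# One full pass over a 12TB drive at an assumed average rate.
drive_tb = 12
rate_mb_s = 150        # assumed sustained throughput; varies by drive
seconds = drive_tb * 1e12 / (rate_mb_s * 1e6)
print(f"~{seconds / 3600:.0f} hours per pass")   # ~22 hours

Clearing each new disk and then rebuilding parity are each a full pass, so a multi-drive upgrade adding up to about 3 days checks out.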

Z the IVth
Jan 28, 2009

The trouble with your "expendable machines"
Fun Shoe
Embarrassing confession time.

So after messing about with drive health checking etc., my system rebooted itself overnight and the problem with my un-interactable mystery files revealed itself. I think that because the drive had come from another PC, the files required administrator privileges to move, and prior to the reboot Explorer wasn't showing me the permission popups, so everything hung.

Now that the popups appear, I click authorise and everything moves as it should.

There isn't a :blush: big enough to describe my utter failure as a goon.

Yaoi Gagarin
Feb 20, 2014

Z the IVth posted:

Embarrassing confession time.

So after messing about with drive health checking etc., my system rebooted itself overnight and the problem with my un-interactable mystery files revealed itself. I think that because the drive had come from another PC, the files required administrator privileges to move, and prior to the reboot Explorer wasn't showing me the permission popups, so everything hung.

Now that the popups appear, I click authorise and everything moves as it should.

There isn't a :blush: big enough to describe my utter failure as a goon.

its ok, windows just be like that sometimes

Nulldevice
Jun 17, 2006
Toilet Rascal

Z the IVth posted:

Embarrassing confession time.

So after messing about with drive health checking etc., my system rebooted itself overnight and the problem with my un-interactable mystery files revealed itself. I think that because the drive had come from another PC, the files required administrator privileges to move, and prior to the reboot Explorer wasn't showing me the permission popups, so everything hung.

Now that the popups appear, I click authorise and everything moves as it should.

There isn't a :blush: big enough to describe my utter failure as a goon.

It could be a lot worse man. And like the other dude said, windows be like that sometimes.

politicorific
Sep 15, 2007
I am coming here for a confidence check. I need a critique.

I want to build a rackmount PC, place it inside something like a NavePoint 15U server cabinet (https://www.newegg.com/p/2BA-001H-000Z4), and run Unraid (or a goon-suggested alternative). I want to put a few ideas/limitations upfront and provide my rationale. I thought about posting this in the HomeLab thread, but it didn't seem appropriate. So here's what I'm thinking:
  • I live in East Asia, so I don't have access to the typical goods/products on Newegg (but Taobao and Aliexpress ship quickly). I also don't have the insider info to purchase used enterprise equipment. So I cannot get any Rosewill products and I also cannot get the Navepoint case, but there are a lot of other interesting cabinets (or unbranded OEM stuff).
  • I also live in a multi-room apartment on a high floor. The "situation" has impacted travel and work, so I'm starting to think about indulging myself by building up my home computing equipment to pursue more computer-related hobbies. My travel budget is getting dumped into this hardware budget. Of course, the significant-other-approval factor is really important, therefore I need to keep the equipment quiet. Plus, having one mini-rack seems much easier to move around, should I ever need to move apartments. The idea of having something the size of a nightstand right next to my desk seems much better than some desktop case and a mess of wires. In fact, a nice glass door on it would look really slick. Shoot, I might even be able to fit a wine cabinet right next to it and install some rack-mounted audio equipment. I really wonder why this hasn't become more popular... although it probably gives off a late-1960s/early-70s minicomputer feel.
  • Also, I'm considering other locations where I could place the rack; such as in a living room or if I swap rooms for my home office. I would need to run some more Ethernet cable through the existing conduits to make HDMI&USB over Ethernet possible. Linus Tech Tips' videos on Unraid really opened my eyes.
  • For a long time I've wanted to do more virtualization on devices that are on 24x7 and build my own personal cloud. I've also wanted to escape the Apple/Google/Microsoft ecosystems as much as I can. The Windows Subsystem for Linux caused a lot of issues on my laptop. Once I started calculating the cost of a new Apple Mac Mini and a Synology/QNAP NAS, I found I only need to invest a little bit more to gain a lot more freedom and options. Some of the services I'm interested in running are Ansible, PiHole, Joplin, Git, Plex, HomeAssistant, and whatever else is easy to spin up. I also like the idea of snapshotting system images for backup. I've had single-board computers crash on me dozens of times in the past.
  • I would also like to add a UPS to the system. Rack mounts for this are convenient and clean. I've experienced power cuts in many countries I've lived in, and I expect more in the coming years. I have a 500/250 Internet connection over fiber/GPON. I need to run some fiber UPC extenders through these conduits to relocate the router if I'm going to put it on battery backup. Of course, I could be wrong in assuming the telco keeps its equipment powered during a power cut; mobile LTE service has stayed up in the past, and it's the same operator. I'm not too confident about running pfSense in a VM (I did run m0n0wall years ago), but I need to keep the Internet up... in fact I've had some outages with the consumer-grade 802.11 access points I have. Who knows, maybe I can get an SFP GPON adapter from my telco and a separate 1U server for that purpose.
  • My current PC is a 2-year-old laptop with an i7-1065G7. I also purchased a Razer Core Thunderbolt 3 eGPU and put an older graphics card inside it for gaming. I do have a Steam account, but I mostly play strategy games and older games. I do have a 4K monitor (43"), but I usually run games in 1080p or in windowed mode. Hopefully I'll be able to sell some of this equipment to help fund the build.
  • My work PC is a 3-year-old ThinkPad which is getting a little slow and old. However, my company allows employees to run their work Windows 10 image off of virtual machines. This PC doesn't need a 3D GPU; a simple video output will do, but I'd like to have a dedicated video output (HDMI/Display port) for it, rather than a virtual desktop. Perhaps I can forgo a PC refresh and get better performance.
  • I have a decent Aten 4 port USB KVM and my monitor can take up to 6 inputs. (Limited to 4k 60 Hz).
  • I would also like to look into transcoding if it makes sense (by trying to find a used Quadro P2000).
  • Also, correct me if I'm wrong, but by running all these machines off of the same Unraid host, I should be able to transfer files much faster than Gigabit Ethernet, right? That would eliminate the need for expensive networking gear.
  • Rather than have multiple machines, (a dedicated NAS, a dedicated gaming machine, and an army of Raspberry Pis), I thought "why not spend a little bit more and get a lot more functionality?" Plus, I'd like to have direct video output from the machine and not try to pump 60 FPS through a network connection to a desktop software client. Steam Link-type tools seem more like a toy rather than something I would want to use at all times.
  • This will be an Intel build (I've got nothing against AMD). The thing that really opened my eyes about virtualization is that extra cores don't seem to be impacting the performance of many applications, so I can benefit by combining multiple machines into one box running Unraid. For example, take this video comparing gaming performance across the extreme editions: the top-end, double-the-price 10980XE (18 cores/36 threads) performs very similarly to the 'entry-level' 10900X (10 cores/20 threads). https://www.youtube.com/watch?v=r3JRKhEu0SI
    Although, I expect that some will say that this is just a symptom of games having not yet been coded to take advantage of multiple cores or that I've been in the Windows world too long.
    Using these parts means using the x299 chipset.
  • I have not really given a lot of thought into how many hard drives to populate this with yet or how to configure them in Unraid. I really don't know how much storage I'll need. Most motherboards I'm looking at have 6-8 SATA ports. This won't be a video editing rig; I need more of a long-term data pool. I read that as of today, SSDs are still 4.5-5x the cost of HDDs. However, it seems like we might reach cost parity in a couple of years, which may make a difference if I expand storage later. While SSDs are silent, Backblaze's initial data shows that SSDs have about the same failure rate as HDDs. Unraid's documentation suggests adding the largest parity drive you can.
  • I also have not thought about cooling. When I got into PC gaming here on SA 20 years ago, water cooling was not reliable... so I'm a bit biased against it. If I have a 3U server case, I should be able to run a decent amount of airflow from front to back, right?
  • Ultimately, being able to shut down VMs to fine-tune available memory, GPUs, and CPU core count is very attractive.


So the major parts would be:
  • i9-10980XE
  • A motherboard with an x299 chipset (I guess I need to dig through the UnRaid forums to make sure the one I buy doesn't have any issues)
  • Case: either a SilverStone GD08 or SilverStone CS350. These both seem to be 3U cases.
  • Storage cache pool drives: I have two new unopened 512 GB 2.5" SSDs
  • RAM: Should I buy the biggest DIMM I can (like a single 64 GB one) and then buy more later to expand, which would also increase the memory channels? (Rough bandwidth math below.)
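
Rough math on why channel count matters, so I can sanity-check answers (theoretical peaks; DDR4-3200 is just an example speed, and sustained numbers will be lower):

code:
# Peak DDR4 bandwidth vs. populated channels (illustrative).
mt_per_s = 3200        # DDR4-3200 = 3200 mega-transfers/second
bytes_per_xfer = 8     # one 64-bit channel
for channels in (1, 2, 4):
    gb_s = mt_per_s * 1e6 * bytes_per_xfer * channels / 1e9
    print(f"{channels} channel(s): {gb_s:.1f} GB/s")
# 1 -> 25.6 GB/s, 2 -> 51.2 GB/s, 4 -> 102.4 GB/s (X299 is quad-channel)

So a single 64 GB DIMM would leave three of X299's four channels empty.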

politicorific fucked around with this message at 05:10 on Oct 4, 2021

Yaoi Gagarin
Feb 20, 2014


Various thoughts in no particular order:

- I'm pretty sure you can't install normal 120mm fans in a 3U case, so you're going to be stuck with LOUD fans. A 4U is probably better from that perspective
- water cooling is unnecessary assuming you have enough airflow going through the case
- UPS is a must-have
- transfers between VMs over the hypervisor's network bridge are pretty much at RAM speed (a crude way to verify this is sketched below)
- if your gaming machine is a VM on this server you'll have to look at GPU passthrough. Doable, but takes effort
- you'll definitely want all the VMs installed on SSD-based storage (especially the VM you use as a PC), but hard drives will be fine for bulk data
- RAID is not a backup; it exists purely to improve availability (uptime), so plan a proper backup for all this stuff
- consider having your main PC as a dedicated desktop anyway so you can still check your email and pay your bills online whenever this homelab inevitably explodes, figuratively or literally
- consider going AMD and getting a 5950X
- personally I would rather do all this on TrueNAS than Unraid, but that would take away your GPU passthrough option. There's also Proxmox as an option
- do NOT buy a single DIMM of RAM; buy a kit that populates your channels (which is 2 on most platforms)
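
On the bridge-speed point: if you want to verify it once the box is built, here's a crude single-file Python check (a stand-in for a proper tool like iperf3; the port number is arbitrary):

code:
# Run "python bench.py server" in one VM, "python bench.py client <ip>"
# in the other, and compare the result to your physical NIC speed.
import socket, sys, time

PORT = 5001            # arbitrary
CHUNK = 1 << 20        # 1 MiB per send
TOTAL = 4 << 30        # 4 GiB total

def server():
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received, t0 = 0, time.monotonic()
            while (buf := conn.recv(CHUNK)):
                received += len(buf)
            print(f"{received / (time.monotonic() - t0) / 1e9:.2f} GB/s")

def client(host):
    payload = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as s:
        for _ in range(TOTAL // CHUNK):
            s.sendall(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])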

Happy Dolphin
Apr 12, 2007

:shepface::shepface::shepface:
I'm not sure if this is out of scope for the thread, but I've just received an IBM FlashSystem 5035. After configuring the IP for initial setup, I changed the cable to Port 1 and changed the IP on my device to be in the same range as the SAN's - however, the NIC ports don't show any link or activity lights. I can see the NIC adapters through the Technician UI, but no dice getting to the initial configuration.

Nulldevice
Jun 17, 2006
Toilet Rascal

Happy Dolphin posted:

I'm not sure if this is out of scope for the thread, but I've just received an IBM FlashSystem 5035. After configuring the IP for initial setup, I changed the cable to Port 1 and changed the IP on my device to be in the same range as the SAN's - however, the NIC ports don't show any link or activity lights. I can see the NIC adapters through the Technician UI, but no dice getting to the initial configuration.

Enterprise SAN thread: https://forums.somethingawful.com/showthread.php?threadid=2943669

Happy Dolphin
Apr 12, 2007

:shepface::shepface::shepface:

Much appreciated. Didn't show up in my search!

Actuarial Fables
Jul 29, 2014

Taco Defender

politicorific posted:

I am coming here for a confidence check. I need a critique.

I want to build a rackmount PC, place it inside something like a NavePoint 15U server cabinet (https://www.newegg.com/p/2BA-001H-000Z4)
  • Of course, the significant-other-approval factor is really important, therefore I need to keep the equipment quiet.
  • I have not really given a lot of thought into how many hard drives to populate this with yet or how to configure them in Unraid.
  • I also have not thought about cooling.

I have an unbranded server cabinet similar to the one you linked. You'll need to keep the depth of the cabinet in mind, as that will limit what kind of chassis you can put in it unless you're ok with things sticking way out the back.

Short-depth server cases often have trade-offs that need to be kept in mind, like not being able to use full-length graphics cards if other components are installed, weird/bad fan placements (don't want to cook your HDDs), no externally-accessible HDD bays (hope you're ok with taking everything apart to get to a failed HDD). Going up to 4U can help with finding a case that can fit your needs.

For how many drives to put in it, figure out what you want to store and how long you want to store it. Depending on how many HDDs you need, it can significantly limit your chassis choice. A few drives won't change much, but 6+ will.

Cooling a power-hungry CPU 24/7 quietly in a 3U chassis is going to be a challenge.

Here's my current setup. The bottom server is my storage server, which runs FreeNAS and hosts a Nextcloud instance. It uses a low-powered CPU and runs 24/7, while the other two servers I turn on only when needed to keep power usage and noise down. The middle and top servers are virtualization hosts, and the middle one has a GTX 1080 that I pass through for a gaming VM. Splitting up storage and virtualization let me fit everything inside the cabinet without sacrificing noise, capacity, or performance.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

It's been mentioned, but most mini racks are for network equipment and it can be difficult to find one that'll fit even a shallow-depth server chassis. If it's something that will be seen and may need to move with you in the future, it's worth looking at audio equipment racks. They're still the standard 19" wide, usually deep enough to fit a shallow chassis, can usually be found on castors so they're easy to move, and generally look pretty nice compared to server racks (and they're usually much cheaper). It just means that you likely won't have a square hole system; you'll have a round hole system. This is really only an issue if you're frequently putting stuff into and out of the rack (fixed threaded holes wear out with no way to replace them like you could with cage nuts, and unthreaded holes need a nut on the back, which is annoying to install) or if you're looking to use surplus enterprise gear, since most of their rail systems are designed to work toollessly with square posts (but you mentioned that you're really not in the used enterprise gear market).

I can vouch for Unraid in terms of ease of use for someone not very familiar with Linux. The ability to add whatever size drives whenever you get them is great as long as you don't need high read/write speeds (an SSD cache drive is a must). Its interface for both VMs and Docker containers makes it very simple to get stuff up and running without needing to learn all the networking/CLI commands for Docker settings, but it also lets you use those advanced features if needed. I haven't tried to do GPU passthrough yet but as long as it's a 10 series or newer Nvidia card it seems much easier than it used to be.

I run a whole bunch of containers, Plex, and HomeAssistant (VM) on a Ryzen 1600AF and it seems to handle the load just fine.
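
To give a sense of what those Unraid templates are doing for you, here's roughly the equivalent with the Python Docker SDK - a sketch of the general idea, not what Unraid actually runs; the image name and paths are just examples:

code:
# What an "app template" boils down to: pull an image, run it with
# volume and port mappings, and keep it running across reboots.
import docker  # pip install docker

client = docker.from_env()
container = client.containers.run(
    "lscr.io/linuxserver/plex",              # example image
    detach=True,
    name="plex",
    ports={"32400/tcp": 32400},              # container -> host port
    volumes={
        "/mnt/user/media": {"bind": "/media", "mode": "ro"},
        "/mnt/user/appdata/plex": {"bind": "/config", "mode": "rw"},
    },
    restart_policy={"Name": "unless-stopped"},
)
print(container.name, container.status)

The web UI is basically filling in those fields for you, which is why it's easy to drop down to the advanced options when you need them.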

In general I'd say find a case for the server first (you don't have to buy it first, but find a case you know you'd be fine with) and then look for a rack. That way you can be sure you'll be able to at least fit that case, instead of getting a rack and then hoping you'll be able to find a case that'll fit. And, as has been mentioned, if you're going to exist in the same room as the server then go 4U; anything less means you're going to need server-grade fans to push air through the box instead of a normal tower cooler, and that means a lot more noise.

Scruff McGruff fucked around with this message at 18:16 on Oct 4, 2021

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The rack I bought is this one https://www.amazon.com/gp/product/B082XVLG91

It's suitable for my Supermicro 3U and the Ubiquiti Dream Machine Pro and I can use a shelf for equipment that isn't rack-capable.

quote:

  • I would also like to look into transcoding if it makes sense(by trying to find a used Quadro P2000).

I would generally advise against this for most setups unless you have a constant stream of Plex users or have high power costs directly attributable to the transcoding. For personal use, I would advocate for a decently powerful CPU and undervolting it - this is what I'm going to do with my 3900x system I use as my desktop now, in fact.

If power stability is a concern, I'd try to make sure that your Internet connection is also protected. I found out that even though my equipment is all on a UPS that somewhere else along the way to my ISP power goes out anyway, so I primarily use a UPS to condition my power and to keep things running smoothly for a power down.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

necrobobsledder posted:

If power stability is a concern, I'd try to make sure that your Internet connection is also protected. I found out that even though my equipment is all on a UPS that somewhere else along the way to my ISP power goes out anyway, so I primarily use a UPS to condition my power and to keep things running smoothly for a power down.

Cable Internet generally requires power to be up on the street to function, all the cable equipment is driven from line power, as opposed to traditional telephone loops (and probably DSL) where it’s all driven from the telco office and they have giant banks of batteries and/or generators, and will operate during a power outage. This is one of the reasons that for some stuff you still can't replace hardline phones with SIP receivers or similar - the hardline phones will be running even during a power outage while the SIP phones will go down when the internet does (or when the power does).

Drive down a suburban street sometime and you'll see the big cable boxes on telephone poles, they have a big green or red light depending on their status. They're just fed off the transformers AFAIK.

Paul MaudDib fucked around with this message at 05:19 on Oct 5, 2021

politicorific
Sep 15, 2007
Thank you all for your replies! It's nice to have somethingawful as a resource.

While digesting your responses, I came across two websites which I felt were very useful. I didn't see these in the first post, but they look reputable enough that including them might be helpful to newcomers (along with any relevant subreddits, servethehome, and levelonetechs).
https://unraid-guides.com/
https://www.serverbuilds.net/

Serverbuilds in particular has builds calculated down to the dollar... This tells me that I have a lot more reading to do to see if it impacts my plans at all.

I'm going to reply/ask questions about a few points some of you made.

VostokProgram posted:

- transfers between VMs over the hypervisor's network bridge are pretty much at RAM speed
This is awesome. There's little likelihood of me building a sprawling network out of my apartment, or buying a property that could contain a huge rack and would justify 1/2.5/10/25/100G Ethernet, so it's good to know this for the future.

VostokProgram posted:

- consider having your main PC as a dedicated desktop anyway so you can still check your email and pay your bills online whenever this homelab inevitably explodes, figuratively or literally
This made me laugh. Yes, I will likely keep my RPi4 as an emergency backup in case everything goes haywire. A year ago I was considering buying a Lenovo M90n-IoT and thought the solution was 'multiple boxes', but the cost of it was too high for the relatively weak performance.

VostokProgram posted:

raidisnotbackup, ups, dimms
I will follow your advice.

Next,

Actuarial Fables posted:

4U case recommendations

Scruff McGruff posted:

4U case recommendations
Great setup, great feedback! This is exactly what I wanted to see.

Please check my math on my cabinets and cases:
The Silver Stone CS350
https://www.silverstonetek.com/product.php?pid=760&area=en
440 mm (W) x 161.2 mm (H) x 474 mm (D)

Let's use this link as a representative example of the server cabinets I can purchase.
I don't know if there is enough clearance for the server to fit inside without hitting the back.
Do these cabinets typically have the ability to move the rack posts backward and forward?
https://www.alibaba.com/product-det....6bc243a2Ax3FYu

(width x height x depth)
600x(depends on number of Rack Units)x600

There should be enough space to fit this case and future equipment. I guess I need to figure out how tall of a server cabinet I want (600, 800, 1000 mm?). Ha, I just got the idea to replace my other furniture with rackmount cabinets. At least it'd all match.

Actuarial Fables posted:

For how many drives to put in it, figure out what you want to store and how long you want to store it. Depending on how many HDDs you need, it can significantly limit your chassis choice. A few drives won't change much, but 6+ will.
To answer this question, I need to calculate how many RUs I need. My apartment's utility panel already has a tidy RJ-11 phone patch panel, a Cat5e patch panel, a 10/100 Ethernet switch, a small PDU, a 12V power supply, and a coaxial cable distribution panel. My RPi3 running PiHole and my router are stuffed in this wall cabinet; I don't need to relocate all of that to this rack. I should probably take a picture just for Internet cred. Above this panel is the electric circuit breaker panel.
  • 4U - The server
  • 2U - The UPS
  • 1U - A PDU (fancy power strip)
  • 1-2Us - some shelves for my router and RPi3 running PiHole.
  • 2U - 120mm Fans?
By my count, I have plenty of RUs left over (if I get a 15-20RU cabinet). I began to wonder what the length limit on SATA cables is (it's 1 meter), so I thought: what if I just route all those cables to another 4U case? Then after some googling, I found there are SAS cards with external ports for SAS expanders... which led me to articles like this one: https://www.serverbuilds.net/16-bay-das
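
Quick arithmetic on that RU budget (the fan tray is the item I'm least sure about):

code:
# RU budget from the list above vs. candidate cabinet sizes.
items = {"server": 4, "UPS": 2, "PDU": 1, "shelves": 2, "fans": 2}
used = sum(items.values())                   # 11U
for cabinet in (15, 20):
    print(f"{cabinet}U cabinet: {used}U used, {cabinet - used}U free")
# 15U: 11 used / 4 free; 20U: 11 used / 9 free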

Going the SAS expander route means I won't fret so much about cramming all of my stuff into this first case. It also means I don't need to be disappointed about trading HDDs for GPUs. Just thinking now, it'd be nice if there were a 3.5"/2.5" drive "vertical" cage meant for SAS expanders that could be mounted to the motherboard mounts of an ATX case. This would be one way to cram additional hard drives into standard cases for DIYers. Maybe one day I'll do some industrial design and get the parts laser cut.

Yes, there is plenty of space for the future if I get a SAS card and expander. Yesterday I saw a 16 TB Toshiba drive go up on a local site for about 360 USD; $22.50/TB is a little more than I want to pay. So maybe for now I'll just set up something small to run the VMs and figure out long-term HDD storage a different way.

Actuarial Fables posted:

Cooling a power-hungry CPU 24/7 quietly in a 3U chassis is going to be a challenge.
Yes, I need to read more about GPU passthrough and find out what the idle power consumption of my chosen parts is. I think setting up a wattage meter on this would be extremely useful.

Scruff McGruff posted:

It just means that you likely won't have a square hole system; you'll have a round hole system. This is really only an issue if you're frequently putting stuff into and out of the rack (fixed threaded holes wear out with no way to replace them like you could with cage nuts, and unthreaded holes need a nut on the back, which is annoying to install)
This is a nice nugget of experience. I see that the Silverstone CS350 and RM4000, as well as the no-name Chinese-made cases I can purchase, all have round mounting holes. Looks like all the racks have square holes.

Scruff McGruff posted:

I haven't tried to do GPU passthrough yet but as long as it's a 10 series or newer Nvidia card it seems much easier than it used to be.
Thank you for this and for sharing your experience with Unraid. I'm trying to get my hands on a 3060 Ti.

Impotence
Nov 8, 2010
Lipstick Apathy
Do note that most MMOs will ban you for running in a VM, especially anything even slightly related to Asia; their anticheats are rabid against VMs.

BlankSystemDaemon
Mar 13, 2009




Paul MaudDib posted:

Cable Internet generally requires power to be up on the street to function, all the cable equipment is driven from line power, as opposed to traditional telephone loops (and probably DSL) where it’s all driven from the telco office and they have giant banks of batteries and/or generators, and will operate during a power outage. This is one of the reasons that for some stuff you still can't replace hardline phones with SIP receivers or similar - the hardline phones will be running even during a power outage while the SIP phones will go down when the internet does (or when the power does).

Drive down a suburban street sometime and you'll see the big cable boxes on telephone poles, they have a big green or red light depending on their status. They're just fed off the transformers AFAIK.
The POTS subscriber loop carries -48V, but as soon as you add a DSL filter, or worse a DSL box with VoIP, you lose the high-nines dial-tone availability that POTS offers.
This is, in theory, a real issue for countries like Denmark, where DSL, FTTH, FTTC/N and other systems are so widely deployed that almost nobody has a regular telephone anymore; if cellphone service goes down and all electricity is out, almost nobody will be able to make a call.

Biowarfare posted:

Do note that most MMOs will ban you for running in a VM, especially anything even slightly related to Asia; their anticheats are rabid against VMs.
You can still make a VM into an oblivious sandbox (i.e. an isolated environment where it's impossible to tell that there's virtualization involved).

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

politicorific posted:

This is a nice nugget of experience. I see that the Silverstone CS350 and RM4000, as well as the no-name Chinese-made cases I can purchase, all have round mounting holes. Looks like all the racks have square holes.
Yeah, all the items going into the rack will generally have round holes; it's just the rack posts that will have either square or round mounting holes. Square-post racks tend to be a bit more flexible because they'll work with regular gear using cage nuts (which snap a threaded socket into the square hole), or with rack mounting systems designed to attach directly to a square-post system, like the Dell ReadyRails (those are specific to their PowerEdge servers though).

Another great Unraid resource is SpaceInvader One. He seems to have a video guide for almost everything you'd want to do in Unraid and does a great job explaining the process.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

politicorific posted:

This will be an Intel build (I've got nothing against AMD). The thing that really opened my eyes about virtualization is that extra cores don't seem to be impacting the performance of many applications, so I can benefit by combining multiple machines into one box running Unraid. For example, take this video comparing gaming performance across the extreme editions: the top-end, double-the-price 10980XE (18 cores/36 threads) performs very similarly to the 'entry-level' 10900X (10 cores/20 threads). https://www.youtube.com/watch?v=r3JRKhEu0SI
Although, I expect that some will say that this is just a symptom of games having not yet been coded to take advantage of multiple cores or that I've been in the Windows world too long.

Others have answered a lot of your other questions, but I think this one got missed. So you're right in the sense that many applications have limited scaling with cores--many older games, in particular, simply don't bother to use more than 2-4 cores, regardless of how many you have available.

That said, if you're thinking about a single large box with a whole bunch of VMs, you will absolutely benefit from a high-core-count CPU, since the less over-provisioning of cores between the VMs, the better off you'll be. But it's very possible to push that too high: if you figure 6 for your Windows box, 1-2 for Plex, and maybe 4 total for Ansible, PiHole, Joplin, Git, HomeAssistant, etc., you might not need more than 12c, and could likely get away with less given that things like Ansible, Joplin, Git, and HA aren't constantly churning through data.
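
To put rough numbers on that budgeting (the per-VM counts are the guesses above, not benchmarks):

code:
# Rough vCPU budget; light services can share cores because they're
# rarely busy at the same moment.
vms = {"windows desktop": 6, "plex": 2,
       "ansible/pihole/joplin/git/ha": 4}
total = sum(vms.values())
print(f"{total} vCPUs with no sharing")          # 12
for cores in (10, 12, 16, 18):
    print(f"{cores} cores -> {total / cores:.2f}x overcommit")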

An i9-10980XE will run you north of $1100 (possibly much more depending on your local markets). An AMD 5950X is faster in most regards and costs only $800, and you get better motherboard expansion capabilities as a bonus. Or, if after you've sketched out everything you plan to stick on the system you find that 30+ threads is unnecessary, you could consider dropping down to something like a 10850K (10c/20t) for $400 or a 5900X (12c/24t) for $550.

If you have the money and don't care, then sure, go hog wild. But if you're price sensitive it's something to give some thought to.

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
It appears Synology devices will soon see improved performance when transcoding HDR content via Plex: https://forums.plex.tv/t/plex-media-server-1-24-5-5071-new-transcoder-preview/746527

Sir DonkeyPunch
Mar 23, 2007

I didn't hear no bell

Internet Explorer posted:

You should be able to migrate the disks over like you said; the OS is on the disks themselves, so that part should be very straightforward. Going from RAID-1 to RAID-5 is also easy. Shouldn't be any issues. Of course, you should already have a backup of this data, but yeah.

https://kb.synology.com/en-global/DSM/tutorial/Why_cant_I_change_RAID1_to_SHR_or_vice_versa
https://kb.synology.com/en-us/DSM/help/DSM/StorageManager/storage_pool_change_raid_type?version=6

Thanks, this all seems to have worked fine. Though I sort of wish I'd known that adding the new drives and shifting to RAID 5 would take 4 days (per the estimates a day in).

CopperHound
Feb 14, 2012

Sir DonkeyPunch posted:

Thanks, this all seems to have worked fine. Though I sort of wish I'd known that adding the new drives and shifting to RAID 5 would take 4 days (per the estimates a day in).
Welcome to the world of distributed parity.
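
The reshape has to read and rewrite every stripe on every disk while still serving normal I/O, which is why the estimates run to days. A rough model (the rate is a made-up-but-plausible figure for a busy reshape, not a Synology spec):

code:
# Why growing a parity array takes days.
disk_tb = 10           # example per-disk capacity
rate_mb_s = 50         # assumed effective reshape rate under load
hours = disk_tb * 1e12 / (rate_mb_s * 1e6) / 3600
print(f"~{hours:.0f} hours")   # ~56 hours, before any follow-up checks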

kiwid
Sep 30, 2013

So... I chickened out on becoming an hero and before doing so I deleted all my zvols and then destroyed the zpool on my freenas server. I didn't wipe the disks. Any way to restore that poo poo?

edit: and by zvol I mean a dataset I think.

kiwid fucked around with this message at 19:54 on Oct 12, 2021

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

kiwid posted:

So... I chickened out on becoming an hero and before doing so I deleted all my zvols and then destroyed the zpool on my freenas server. I didn't wipe the disks. Any way to restore that poo poo?

edit: and by zvol I mean a dataset I think.

Try to recover a zpool: https://www.unixarena.com/2012/07/how-to-recover-destroyed-zfs-storage.html/.

Never actually tried it to find out if that'll pull back the zvol/dataset, though.

BlankSystemDaemon
Mar 13, 2009




So long as you don't attempt to do any write operations to the vdev members after you've destroyed a pool, it still exists and can be reimported - but it'll be in whatever state it was in last.

When you delete a dataset, the associated records get marked to be freed by a transaction group operation, which in turn triggers the background free that ZFS runs in between all the other operations.
So the only way I can think it could work is if you immediately force-exported the pool, then used zdb(8) to find the transaction group associated with the free operation, and imported the pool at a point before that transaction group using a series of potentially risky flags.
However, since I've never actually tried doing this, it's just a guess; I can't guarantee it'll work, or even that it won't blow up in your face (although you might mitigate that by importing read-only).

ZFS checkpoints were invented to make these sorts of administrative commands possible to roll back, but the downside to checkpoints is that you can only have one, and everything is written in an append-only transaction log until you remove the checkpoint or rewind to it on a subsequent import.

Raymond T. Racing
Jun 11, 2019

The serverbuilds team are a bunch of pricks and it’s going to be heavily Western Hemisphere centric when it comes to parts and availability.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Is there an easy way to look up a given hard disk model number and figure out if it's shingled or not? Are shingled drives still terrible for sustained write speed?

I was curious what 2.5" disk availability was like nowadays, and it looks like the only 5TB 2.5" disks available are Seagate Barracuda ST5000LM000, which are actually pretty drat cheap with not much price premium vs a 3.5" disk at around $140, but are shingled.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Twerk from Home posted:

Is there an easy way to look up a given hard disk model number and figure out if it's shingled or not? Are shingled drives still terrible for sustained write speed?

The formatting is kinda a mess, but https://nascompares.com/2021/04/22/smr-cmr-and-pmr-nas-hard-drives-a-buyers-guide-2021/ has a pretty comprehensive list.

SMR isn't terrible for sustained writes; it's terrible for overwriting. If you took a fresh disk, did a continuous data dump to it, and then basically used it as a WORM drive, it would perform quite well. But fill it up, start deleting individual files, and then try to write to it, and it's gonna thrash itself into the ground.
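
A toy model of that asymmetry (the zone size is illustrative, and real firmware is smarter than this, but the shape of the cost is right):

code:
# Overwriting inside a shingled zone forces a rewrite of everything
# from the write point to the end of the zone; appends are cheap.
ZONE_MB = 256                       # illustrative zone size

def write_cost_mb(offset_mb, size_mb, fresh_zone):
    if fresh_zone:                  # sequential fill / WORM usage
        return size_mb
    return ZONE_MB - offset_mb      # re-staged data on an overwrite

print(write_cost_mb(0, 4, fresh_zone=True))     # 4 MB written
print(write_cost_mb(100, 4, fresh_zone=False))  # 156 MB rewritten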

Buff Hardback posted:

The serverbuilds team are a bunch of pricks and it’s going to be heavily Western Hemisphere centric when it comes to parts and availability.

This is true, but at the very least it's still a good resource because they identify the best bang:buck ratio parts, which often are still the most cost-efficient even in Asian markets (just at different price points). Also a lot of eBay sellers ship international these days, which can be useful for smaller/lighter components like NICs and such.

DrDork fucked around with this message at 05:38 on Oct 14, 2021

Raymond T. Racing
Jun 11, 2019

DrDork posted:

The formatting is kinda a mess, but https://nascompares.com/2021/04/22/smr-cmr-and-pmr-nas-hard-drives-a-buyers-guide-2021/ has a pretty comprehensive list.

SMR isn't terrible for sustained writes; it's terrible for overwriting. If you took a fresh disk, did a continuous data dump to it, and then basically used it as a WORM drive, it would perform quite well. But fill it up, start deleting individual files, and then try to write to it, and it's gonna thrash itself into the ground.

This is true, but at the very least it’s still a good resource because they identify the best bang:buck ratio parts, which often are still the most cost-efficient even in Asian markets (just at different price points). Also a lot of eBay sellers ship international these days, which can be useful for smaller/lighter components like NICs and such.

I’d be more hesitant about that now, they leaned hard into referral links for “supporting the server”, and referral links and best bang for buck are kinda opposing goals in my opinion.

BlankSystemDaemon
Mar 13, 2009




SMR drives are also terrible when doing ZFS resilvering, because they simply don't handle the random I/O read patterns, since they're built for sequential access.

So a ZFS pool consisting of SMR drives may work passably for years if it's WORM storage, right up until one drive fails and you replace it with another. Then you're stuck waiting for a resilver that's gonna take weeks if not months.

Warbird
May 23, 2012

America's Favorite Dumbass

Hey folks, a couple of quick questions for the room.

I've recently found myself with a couple of 10TB drives that I'm no longer using and figure I'll put them towards an offsite backup location. There are plenty of ways to skin that particular cat, but I'd be interested to hear if there are any solutions around managing an offsite Pi or whatever and keeping things sane there. I could just have it VPN into my network, and that may be what I do, but I figured I'd ask around.

For context this is coming off a Synology machine, so I wouldn't be opposed to picking up a very basic 2bay model and having it just sit somewhere back at the homestead and I could do my thing via the web portal. My folks are not tech inclined in the least so the more I can do to be able to troubleshoot and be fault tolerant while 8 hours away, the better.


Question 2: I've got a couple of drives in the process of being added to an existing Syn storage pool and it's taking, understandably, a long rear end time. That's all fine and good, but I did notice that the current process is listed as Step 1 of 2. What's the second step? Repeating the whole shebang with the second added drive?

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



BlankSystemDaemon posted:

SMR drives are also terrible when doing ZFS resilvering, because they simply don't handle the random I/O read patterns, since they're built for sequential access.

So a ZFS pool consisting of SMR drives may work passably for years if it's WORM storage, right up until one drive fails and you replace it with another. Then you're stuck waiting for a resilver that's gonna take weeks if not months.

This is why I ended up going with a non-zfs solution for my NAS. I had 6 8Tb drives I could shuck, and that's hundreds of dollars of "free" storage I don't have to buy by going with a Synology.

Maybe a few files will bitrot over the life of the NAS, but I'm willing to risk that to save 7 or 8 hundred dollars

BlankSystemDaemon
Mar 13, 2009




Nitrousoxide posted:

This is why I ended up going with a non-zfs solution for my NAS. I had 6 8Tb drives I could shuck, and that's hundreds of dollars of "free" storage I don't have to buy by going with a Synology.

Maybe a few files will bitrot over the life of the NAS, but I'm willing to risk that to save 7 or 8 hundred dollars
What, you had 1000GB disks? :v:

All the SMR disks that got submarined into existing product lines by WDC were in the 6TB-or-less category, so if you bought 8TB external drives and shucked them, you would get either a whitelabel Red or some other equally good drive for NAS use.
Better yet would be to not give money to companies that submarine inferior technologies into existing product lines to save money.

SMR absolutely has a use, even with ZFS. As I think I've mentioned before, SMR drives could make a good ZFS snapshot destination (ie. where you just send the raw snapshot directly to the character device, like you would with a tape drive) - which means that once corrective receive lands, you should be able to fix anything from a single file lost to an URE during a rebuild, up to entire arrays if you lose too many disks, using mirrored, ditto, or distributed parity blocks.
In that scenario it also makes sense to use SMR's higher density to build drives with as high a capacity as possible, instead of how SMR is used now, where the higher density is used to remove a platter and save on manufacturing costs.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

Warbird posted:

Hey folks, a couple of quick questions for the room.

I've recently found myself with a couple of 10TB drives that I'm no longer using and figure I'll put them towards an offsite backup location. There are plenty of ways to skin that particular cat, but I'd be interested to hear if there are any solutions around managing an offsite Pi or whatever and keeping things sane there. I could just have it VPN into my network, and that may be what I do, but I figured I'd ask around.

For context this is coming off a Synology machine, so I wouldn't be opposed to picking up a very basic 2bay model and having it just sit somewhere back at the homestead and I could do my thing via the web portal. My folks are not tech inclined in the least so the more I can do to be able to troubleshoot and be fault tolerant while 8 hours away, the better.


Question 2: I've got a couple of drives in the process of being added to an existing Syn storage pool and it's taking, understandably, a long rear end time. That's all fine and good, but I did notice that the current process is listed as Step 1 of 2. What's the second step? Repeating the whole shebang with the second added drive?

I would really recommend the Synology then. The Raspberry Pi needs poo poo like a UPS, and even then it loves to eat SD cards, and the troubleshooting for that requires physical access.

If you do want to go the Pi route I'd recommend grabbing one of the UPS hats and a SATA hat as well, since even the Pi 4's bus is really slow, like 35MB/s ime. You can make it more resilient by grabbing an extra SD card and imaging your working install to it; then if it breaks you can at least get family to swap out the broken one and power-cycle the thing.

In Canadian dollars a Synology DS220j breaks even with the Pi plus all the hardware to make it a passable NAS, but the Synology will be a tank and has an offsite backup solution built in.

Warbird
May 23, 2012

America's Favorite Dumbass

Yeah, I was eyeballing two bay models and I figure you’re right. May have to see if I can source a used one as I don’t need anything particularly fancy for these efforts.

xarph
Jun 18, 2001


The problem with the SMR reds and ZFS wasn't the SMR part; it was the device-managed SMR part where it would write 200ish GB to a CMR area of the platter, then halt/throttle/otherwise go comatose while the drive firmware destaged it all to the SMR area. During this time ZFS is asking it about the status of random blocks all over the disk, and soon the onboard controller just loses its poo poo and ZFS declares the drive offline.

A prior SMR drive would block on IO requiring rewriting of a shingled area, which would be slow but ZFS would wait for it. WD's DM-SMR commits the cardinal sin of lying to ZFS about what data is on what areas of the disk and, crucially, whether it's performing an operation or not.

Crunchy Black
Oct 24, 2017

by Athanatos
Yeah, that's a great way to illustrate it; it's the same with controllers - you want a ZFS array to have full pass-through because ZFS wants to talk to the drive directly. In this case, the drive thinks it knows better than the kernel. While for most layperson operations that's probably okay and will only cause a slowdown, it's nigh-fatal to many ZFS operations.

BlankSystemDaemon
Mar 13, 2009




xarph posted:

The problem with the SMR reds and ZFS wasn't the SMR part; it was the device-managed SMR part where it would write 200ish GB to a CMR area of the platter, then halt/throttle/otherwise go comatose while the drive firmware destaged it all to the SMR area. During this time ZFS is asking it about the status of random blocks all over the disk, and soon the onboard controller just loses its poo poo and ZFS declares the drive offline.

A prior SMR drive would block on IO requiring rewriting of a shingled area, which would be slow but ZFS would wait for it. WD's DM-SMR commits the cardinal sin of lying to ZFS about what data is on what areas of the disk and, crucially, whether it's performing an operation or not.
You're not gonna see host-managed SMR drives outside of very specific scenarios, though.

Crunchy Black posted:

Yeah, that's a great way to illustrate it; it's the same with controllers - you want a ZFS array to have full pass-through because ZFS wants to talk to the drive directly. In this case, the drive thinks it knows better than the kernel. While for most layperson operations that's probably okay and will only cause a slowdown, it's nigh-fatal to many ZFS operations.
The way I've found to explain it best is that ZFS is made with the explicit assumption that controllers, disks, and anything else that isn't part of the OS can't be trusted (because we know from experience in datacenters that they lie) - so everything needs to be tightly controlled via checksums, very rigid cache control, transactional atomic operations, and whatever else the designers could come up with.

Sir Sidney Poitier
Aug 14, 2006

My favourite actor


I got a Synology DS1513+ NAS in 2014 and it's got 5x 3TB WD Red drives in it. It's worked great all that time.

Given that it's getting a bit old now, if the NAS enclosure itself dies, can I still take the disks and put them in a new Synology enclosure to recover the data? The most important stuff is backed up externally anyway.

Warbird
May 23, 2012

America's Favorite Dumbass

As far as I am aware you can just drop them in another Syn enclosure with little to no issue.

some kinda jackal
Feb 25, 2003

 
 
I'm not sure there's a better thread for this question, but:

I haven't had to think about old SCSI cabling in forever. I have a computer with a DB25 SCSI port and a SCSI device with a DB25 port. Do I need a special kind of cable, or will any DB25-DB25 straight-through cable connect these two? A lot (most/all) of the DB25M/DB25M cables I see are "serial", so I think they swap RX/TX pins, and that'll probably be a no-go. I'm thinking I could just eBay an old Iomega Zip drive cable maybe? Sucks to pay the eBay "vintage equipment" markup, but whatever.
