|
If I wanted to duplicate 18TB as cheaply as possible, what would you guys recommend as far as drives and build go? Bonus points for portability since I need to copy these elsewhere and bring them home; however, this data doesn't need a ton of durability as far as RAID and high availability go, I can add that in later, I just need to get a copy now. My initial thoughts are FreeNAS and a micro-ATX enclosure, but as I'm looking at prices for parts, I wonder if I would just be better off buying a five-bay Synology expansion enclosure or something? I already have a Synology 1518+ NAS, so would it just be cheaper in energy and build cost to do that?
|
# ? Aug 20, 2019 17:13 |
|
|
To the absolute core of your question: I'd be tempted not to make the decision now. Buy two 10TB EasyStores, do not shuck them, format them to a reasonably up-to-date format that you expect to use later (e.g. exFAT), and copy the data that way. That would be portable and could later get it onto your existing Synology.

However, to your speculation, I agree: if you wanted to build out a NAS cheaply, my personal approach was old hardware with Unraid, but FreeNAS is clearly loved, as are OpenMediaVault and plain old Ubuntu Server.

Putting it all together, I think in your case, since you're comfortable with the Synology, I'd buy the expander and some drives to add to it. In order to move the data, I'd do what I suggested in the first paragraph to move the data onto your Synology (assuming you have 18TB of space). When you're done, you could shuck the drives and add them to your new expander.
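If you do split it across two externals, it's worth verifying the copy rather than trusting a bare drag-and-drop. A rough sketch of the idea in Python (paths and layout hypothetical; `rsync -c` or a dedicated checksum tool gets you the same result):

```python
import hashlib
import os
import shutil

def sha256_of(path, chunk=1 << 20):
    """Hash a file in chunks so huge media files don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def copy_and_verify(src_root, dst_root):
    """Copy a tree and confirm every file's checksum matches the source."""
    mismatches = []
    for dirpath, _, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = os.path.join(dst_root, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            shutil.copy2(src, dst)  # copy2 keeps timestamps
            if sha256_of(src) != sha256_of(dst):
                mismatches.append(src)
    return mismatches
```

An empty return means every file read back with the same hash, which is the part a plain copy never tells you.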
|
# ? Aug 20, 2019 18:33 |
|
BangersInMyKnickers posted:Here you go: Thanks, and yup, that's it. In lspci output, DevCap reports a device's advertised capabilities, while DevCtl reports what it's actually configured to. There are many root ports, bridges, and devices (e.g. a 50GbE NIC) showing up there which have negotiated a 512-byte max payload.
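If anyone wants to script that check instead of eyeballing `lspci -vv`, something along these lines works. The sample output here is trimmed and purely illustrative, not from a real machine, and real DevCtl output often wraps MaxPayload onto a continuation line, so the patterns would need loosening for production use:

```python
import re

# Trimmed, illustrative lspci -vv output -- not from any real machine.
SAMPLE = """\
03:00.0 Ethernet controller: Example 50GbE NIC
        DevCap: MaxPayload 512 bytes, PhantFunc 0
        DevCtl: MaxPayload 256 bytes, MaxReadReq 4096 bytes
"""

def max_payloads(lspci_text):
    """Pair each device's advertised (DevCap) and negotiated (DevCtl) MPS."""
    cap = [int(m) for m in re.findall(r"DevCap:.*?MaxPayload (\d+) bytes", lspci_text)]
    ctl = [int(m) for m in re.findall(r"DevCtl:.*?MaxPayload (\d+) bytes", lspci_text)]
    return list(zip(cap, ctl))

print(max_payloads(SAMPLE))  # [(512, 256)]
```

A (512, 256) pair is exactly the advertised-vs-configured gap described above.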
|
# ? Aug 20, 2019 19:14 |
|
Seems like my AMD lab server is linking at 128 bytes. My Dell FreeNAS box with an Intel CPU claims to be capable of 256, but is only doing 128?
|
# ? Aug 20, 2019 19:48 |
|
Heners_UK posted:To the absolute core of your question: I'd be tempted not to make the decision now. Buy two 10TB EasyStores, do not shuck them, format them to a reasonably up-to-date format that you expect to use later (e.g. exFAT), and copy the data that way. That would be portable and could later get it onto your existing Synology. This is my suggestion as well. Unless you have a desperate need for 18TB of contiguous space, just chunk it in half onto two 10 or 12TB EasyStores when they go on sale. You didn't specify speed, unified/contiguous space, interface, etc.
|
# ? Aug 20, 2019 20:32 |
|
priznat posted:There's not so much a definitive guide on what devices have what MPS (max payload size), it's more just how devices seem to come out. The actual acceptable values (up to 4KB) are in the PCIe spec. Even though I'm out of the space, now, this actually explains a shitpile about why you'd have randomly piss-poor performance over the UPI/QPI bus when in theory you should only experience a 10% or less impedance to traffic passing from proc to proc, assuming you're not overtaxing the bus. (UPI/QPI is a souped up Intel specific implementation of PCIe as best as I could tell.) Crunchy Black fucked around with this message at 22:31 on Aug 20, 2019 |
# ? Aug 20, 2019 22:28 |
|
Crunchy Black posted:Even though I'm out of the space, now, this actually explains a shitpile about why you'd have randomly piss-poor performance over the UPI/QPI bus when in theory you should only experience a 10% or less impedance to traffic passing from proc to proc, assuming you're not overtaxing the bus. (UPI/QPI is a souped up Intel specific implementation of PCIe as best as I could tell.) Yeah, the Crystal Beach DMA is notoriously lovely, and offload engines (usually PCIe FPGA endpoints) are being used to work around it and get much better throughput point to point. And Linux now has a bunch of built-in support for p2p as well, which helps a lot. I believe the AMD fabric (I forget the name) is also a souped-up PCIe; in multi-proc, the PCIe lanes just become their UPI. Waiting for Epyc 2 to become available like
|
# ? Aug 20, 2019 23:06 |
|
priznat posted:Yeah, the Crystal Beach DMA is notoriously lovely, and offload engines (usually PCIe FPGA endpoints) are being used to work around it and get much better throughput point to point. And Linux now has a bunch of built-in support for p2p as well, which helps a lot. Infinity Fabric, but yeah. Waiting on new Threadripper to replace my Skylake-S for sure. It's also why Cascade Lake-WS in the new Mac Pro, for example, inexplicably has 64 lanes as opposed to the 48 you get in Cascade Lake-SP...suddenly all the pins needed for proc-to-proc can be used for a PCIe implementation Crunchy Black fucked around with this message at 23:46 on Aug 20, 2019 |
# ? Aug 20, 2019 23:44 |
|
Seems positively quaint compared to Epyc’s 128
|
# ? Aug 20, 2019 23:53 |
|
It's why all the C4ISR mil guys are clamoring for it, so they can slam a gently caress pile of FPGAs and high-speed fabric into them. Problem is, none of the SHB/backplane "long life" guys will touch it, because when those same mil-ind guys do their lifecycle analysis, the beanpushers freak out when they can't guarantee how long the constituent parts are going to be available, and then they end up spending 19x to get custom ASICs built to do the same thing. "COTS" my rear end lol. AMD has a good thing going, but while going fabless might have saved them, it definitely hurt them in other ways. Crunchy Black fucked around with this message at 00:00 on Aug 21, 2019 |
# ? Aug 20, 2019 23:58 |
|
The hyperscale dudes don't care, at least; anything 2 years old is positively ancient to them! It's pretty amazing, the hardware churn they have: they're still building out systems while other people are recycling parts of them for being too old. Also, talking about MPS on TLPs: while storage or fast networking likes large TLPs, machine learning stuff is 64-byte TLPs at the max end and often down at 32 bytes or less. Completely opposite requirements, which is always fun for system architects.
|
# ? Aug 21, 2019 00:31 |
|
Yeah while I sorta miss being down in the weeds in the x86 hardware and burgeoning not-uber-classified military machine learning world, I'm in the small to mid-scale private devops cloud arena now, though still with Intel CPUs, and I feel like my job is *slightly* more secure. Mostly because until someone figures out a way to reliably multithread app compiling, there will always be a linear need for hardware. Sorry for the not storage related derail, folks.
|
# ? Aug 21, 2019 01:20 |
|
I kinda wish there was an enterprise hardware/deep dives into system protocol thread but I am way too lazy to start one and probably wouldn't have a lot of critical mass to keep going. But I love every opportunity to gab on stuff, especially NVMe/PCIe, even though I'm a bit limited on good dish based on work and our NDAs with other companies. SSD thread is also good for occasional industry derails too!
|
# ? Aug 21, 2019 01:29 |
|
priznat posted:I kinda wish there was an enterprise hardware/deep dives into system protocol thread but I am way too lazy to start one and probably wouldn't have a lot of critical mass to keep going. But I love every opportunity to gab on stuff, especially NVMe/PCIe, even though I'm a bit limited on good dish based on work and our NDAs with other companies. We need a Server Goodies thread. Especially if it's old vintage servers.
|
# ? Aug 21, 2019 01:41 |
CommieGIR posted:We need a Server Goodies thread. Especially if it's old vintage servers. That means I also have a place to whinge about the SIX loving memory interconnects: Gen-Z, CCIX, CXL, OpenCAPI, NVLink, Infinity Fabric. Can we maybe just have one or two?
|
|
# ? Aug 21, 2019 09:40 |
|
Crunchy Black posted:Even though I'm out of the space, now, this actually explains a shitpile about why you'd have randomly piss-poor performance over the UPI/QPI bus when in theory you should only experience a 10% or less impedance to traffic passing from proc to proc, assuming you're not overtaxing the bus. (UPI/QPI is a souped up Intel specific implementation of PCIe as best as I could tell.)

No, QPI/UPI are profoundly different to PCIe. PCIe is "hey PCI is getting old and lovely, what if we jacked it up a bit, removed the original parallel bus foundation, and hacked in a new SERDES based physical layer?". It has terrible latency (ironically worse than original parallel-bus PCI) and no support for cache coherency. QPI (and its descendant UPI) are what happened when Intel asked the ex-DEC engineers it had inherited to design a clean-sheet big-system cache coherent CPU interconnect. It's designed around minimizing latency to the greatest extent possible for a SERDES-based interconnect, and has all these cool RAS features to boot.

I have no idea where you got the idea of "10% or less impedance" from. They're good interconnects, but there's no way QPI/UPI can compete with the nameless ring or mesh busses used for communications between cores and memory controllers within a single CPU die. The industry term for any system which relies on QPI (or another interconnect fabric like AMD HyperTransport or Infinity Fabric) to link together two (or more) clusters of CPUs/memory controllers is NUMA, for non-uniform memory access. The "non-uniform" refers to the giant difference in performance between accessing something relatively local and something remote.

There are always performance bugs lurking on NUMA systems. Software which accidentally creates loads of traffic on the higher-level interconnect isn't going to have a good time. Some operating system schedulers even try to minimize this with special scheduler algorithms (google NUMA affinity; the basic idea is to try to keep threads/processes which might be touching the same data close together).
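To poke at the affinity idea from userspace: Linux exposes it directly. A minimal, Linux-only sketch; which CPU IDs belong to which NUMA node is machine-specific (see /sys/devices/system/node/node*/cpulist), so pinning to the lowest allowed CPU here is purely for illustration:

```python
import os

# Linux-only: pin the calling process (pid 0 = self) to a single CPU so
# its threads stop bouncing across NUMA nodes. A real tool would pick
# the full CPU set of one node rather than a single core.
before = os.sched_getaffinity(0)   # remember the original CPU mask
target = {min(before)}             # lowest allowed CPU, for illustration
os.sched_setaffinity(0, target)
print(os.sched_getaffinity(0) == target)  # True
os.sched_setaffinity(0, before)    # put it back
```

The heavier-duty version of this is `numactl --cpunodebind`/`--membind`, which also steers memory allocation, not just threads.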
|
# ? Aug 21, 2019 10:48 |
BobHoward posted:No, QPI/UPI are profoundly different to PCIe. PCIe is "hey PCI is getting old and lovely, what if we jacked it up a bit, removed the original parallel bus foundation, and hacked in a new SERDES based physical layer?". It has terrible latency (ironically worse than original parallel-bus PCI) and no support for cache coherency. QPI (and its descendant UPI) are what happened when Intel asked the ex-DEC engineers it had inherited to design a clean-sheet big-system cache coherent CPU interconnect. It's designed around minimizing latency to the greatest extent possible for a SERDES-based interconnect, and has all these cool RAS features to boot. Speaking of NUMA systems, AMD requires patching to schedulers, and so far they've only provided the patches or documentation for it to Microsoft. The worst part about NUMA is how modern CPUs can have not just two NUMA zones but up to three or four (although AMD deserves credit for fixing that issue in Zen 2).
|
|
# ? Aug 21, 2019 12:54 |
|
This was the layout for my server board; seems like it's a single PCIe pipe.
|
# ? Aug 21, 2019 13:03 |
|
What's my best bet for 10-12TB drives, besides shucking WD EasyStores? I'm looking to reduce my drive count and increase my storage significantly, so I'm leaning towards a single Z2 pool of 5-6 disks. Ideally I'd like to double my current storage (16TB) with fewer than 8 disks and slightly better redundancy (currently two 4-disk raidz1 pools, 4x2TB and 4x4TB). Is my best bet just grabbing 6 EasyStores next time they're on sale? Or are any of the 12TB disks better than the others? 5x12TB would be preferable to 6x10TB given my current case layout, but 6 disks would probably be my limit for the moment.
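For comparing the two layouts, the raw raidz2 arithmetic is easy to sanity-check (this ignores ZFS metadata overhead, slop space, and TB-vs-TiB shrinkage, so real usable space comes in lower):

```python
# raidz2 spends two disks' worth of space on parity, so raw usable
# space is roughly (disks - 2) * size per disk.
def raidz2_usable_tb(disks, tb_each):
    return (disks - 2) * tb_each

print(raidz2_usable_tb(5, 12))  # 36
print(raidz2_usable_tb(6, 10))  # 40
```

So 6x10TB actually edges out 5x12TB on raw space; the 5-disk layout only wins on bays and drive count.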
|
# ? Aug 23, 2019 22:14 |
|
Shuck or pay double for bare WDs. The 10TB were on sale recently.
|
# ? Aug 23, 2019 22:18 |
|
I've had a RAID 0 with two 3TB drives for the last few years for a media server that only gets moderate usage. I was thinking about getting a third 3TB drive and making it RAID 5 for some peace of mind in case a drive fails. Is this pointless if both of the original drives have been running for the same amount of time, making it likely that they'll fail at about the same time, or during a rebuild?
|
# ? Aug 23, 2019 23:03 |
|
Twoflower posted:I've had a RAID 0 with two 3TB drives for the last few years for a media server that only gets moderate usage. I was thinking about getting a third 3TB drive and making it RAID 5 for some peace of mind in case a drive fails. Is this pointless if both of the original drives have been running for the same amount of time, making it likely that they'll fail at about the same time, or during a rebuild? I wouldn't say pointless. Yes, the two older drives would be more likely to fail during the rebuild when you're relying on the parity data, but the same situation applies with any new array after it's been in operation for a few years. For an array that small, the likelihood of a double-fault scenario is pretty low. I usually advise bumping to RAID 6 once you're getting into arrays over ~6 members. Just make sure your controller is scheduled to do read patrols on some interval so it knows when a disk is failing instead of waiting until it's catastrophic.
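The "small array, low double-fault risk" point can be sanity-checked with toy numbers. This assumes independent failures, which same-age, same-batch drives violate, and it ignores URE math entirely, so treat the result as optimistic; the per-drive rebuild-window failure probability below is made up for illustration:

```python
# Back-of-envelope only. p = chance a given surviving drive dies
# during the rebuild window; failures assumed independent.
def double_fault(p, surviving_drives):
    """P(at least one more drive dies while the array is degraded)."""
    return 1 - (1 - p) ** surviving_drives

# Illustrative numbers, not measurements:
print(round(double_fault(0.02, 2), 4))  # 3-drive RAID 5, one dead: 0.0396
print(round(double_fault(0.02, 7), 4))  # 8-drive RAID 5, one dead: 0.1319
```

That jump with member count is the intuition behind recommending RAID 6 past ~6 drives.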
|
# ? Aug 24, 2019 03:49 |
|
My CrashPlan situation's getting worse -- I have it on a VM that has my big file system NFS-mounted, and it's worked great for 6 years, but the VM's started hard-locking after I upgraded it to CentOS 6.10. My current solution is to just reboot it whenever CrashPlan notifies me it's disconnected -- probably once a week or so. I'd like to rebuild it, but my understanding is the newer client versions may or may not run headless and definitely don't let you turn off dedupe, which is the only way backup speeds are acceptable. On an 8TB dataset, speeds were down to <10KB/sec with dedupe on but saturated my internet connection once it was turned off. Guess I'm just wondering if there's anything else out there that would handle 8TB for $10/mo or so.
|
# ? Aug 25, 2019 21:18 |
|
Backblaze unlimited is $6/month (cheaper if you pay by the year). I don't think there is a Linux client though.
|
# ? Aug 25, 2019 22:42 |
|
Splinter posted:Backblaze unlimited is $6/month (cheaper if you pay by the year). I don't think there is a Linux client though. Duplicacy supports Backblaze.
|
# ? Aug 25, 2019 23:14 |
|
KS posted:My crashplan situation There's a good Docker image for CrashPlan that virtualizes the interface too, so you can access it via VNC or a browser. I'm using that. If not that, then I'd say just rebuild the VM with a desktop that you remotely access to use the CP app (I use Xubuntu with xrdp for this sort of thing, but purely out of habit rather than reasoned research). Your setup doesn't sound bad.
|
# ? Aug 26, 2019 00:45 |
|
Thermopyle posted:Duplicacy supports backblaze. That's Backblaze B2, which is pay-for-what-you-use instead of a flat monthly fee.
|
# ? Aug 26, 2019 01:15 |
|
skull mask mcgee posted:That's Backblaze B2, which is pay-for-what-you-use instead of a flat monthly fee. Well poo poo.
|
# ? Aug 26, 2019 01:19 |
|
Thermopyle posted:Well poo poo. It might be cheaper depending on your backup size. B2 is really cheap per GB: $0.005 per GB per month. At 8TB, though, that works out to about $40/month, more than KS was wanting to spend...
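Using the thread's numbers, the flat-rate vs metered math goes like this (B2 egress and transaction fees ignored, and 1TB counted as 1000GB):

```python
# Metered B2 storage at the $0.005/GB-month rate quoted above,
# vs a $6/month flat unlimited plan.
B2_PER_GB_MONTH = 0.005

def b2_monthly(tb):
    return tb * 1000 * B2_PER_GB_MONTH

print(b2_monthly(8))  # 40.0 -- vs $6 flat; break-even is around 1.2TB
```

So metered only wins for fairly small backup sets; at 8TB the flat plan is the clear choice if a client exists for your OS.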
|
# ? Aug 26, 2019 02:14 |
|
BangersInMyKnickers posted:I wouldn't say pointless. Yes, the two older drives would be more likely to fail during the rebuild when you're relying on the parity data, but the same situation applies with any new array after it's been in operation for a few years. For an array that small, the likelihood of a double-fault scenario is pretty low. I usually advise bumping to RAID 6 once you're getting into arrays over ~6 members. Just make sure your controller is scheduled to do read patrols on some interval so it knows when a disk is failing instead of waiting until it's catastrophic. Thanks for the advice! I think I will give RAID 5 a try. I got scared off by a bunch of talk about how it's trash, but that all seemed to be from about 5-10 years ago and maybe a little paranoid.
|
# ? Aug 26, 2019 04:48 |
|
RAID is not backup, so realistically RAID 5 should be fine, because if the rebuild fails, it wasn't the only copy of data that can't be recreated, right? That said, I personally wouldn't trust a basic mdraid-style RAID 5 these days, even with data that can be recreated, just because it's a pain. ZFS at least will tell me what data it can't recover instead of flushing the whole array down the shitter. Of course, then I abuse this trust of ZFS by running a massive RAID50-equivalent.
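For the curious, "RAID50-equivalent" in ZFS terms means several raidz1 vdevs striped into one pool, and both the capacity and the failure mode fall out of the layout. The vdev sizes below are made up for illustration, not my actual pool:

```python
# A pool of striped raidz1 vdevs: each vdev gives (disks - 1) * size of
# usable space, and losing two disks in the SAME vdev kills the whole
# pool. Layout is hypothetical.
vdevs = [
    {"disks": 4, "tb_each": 4},  # e.g. zpool create tank raidz1 <4 disks> ...
    {"disks": 4, "tb_each": 8},  #      ... raidz1 <4 more disks>
]
usable = sum((v["disks"] - 1) * v["tb_each"] for v in vdevs)
print(usable)  # 36
```

More vdevs means more stripes and more IOPS, but also more single-parity groups that each have to survive a rebuild.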
|
# ? Aug 26, 2019 06:12 |
IOwnCalculus posted:Of course, then I abuse this trust of ZFS by running a massive RAID50-equivalent.
|
|
# ? Aug 26, 2019 09:25 |
|
I got my U-NAS 810A case up and running. It turns out my use case involves a bit more than just bulk storage: I have a lot to learn, but I think I really like Unraid. I don't think I would trust it for anything mission-critical and it doesn't have the performance of a striped array, but the JBOD implementation pretty much lets me throw whatever hardware I have at it. I can tell it not to split sub-directories across drives, so I should be able to recover stuff off of individual drives if I need to pull them out of the array to read. Adding a cache drive has helped write performance a lot. The SSD does not have any parity, but in my case I can tolerate that risk until the mover process has a chance to run. e: I do hate how Unraid's licensing is tied to physical USB sticks. CopperHound fucked around with this message at 18:22 on Aug 27, 2019 |
# ? Aug 27, 2019 18:20 |
|
CopperHound posted:I got my U-NAS 810A case up and running. It turns out my use case involves a bit more than just bulk storage: You can add a second cache disk and it will default to a mirrored pair.
|
# ? Aug 27, 2019 22:18 |
|
CopperHound posted:I got my U-NAS 810A case up and running. It turns out my use case involves a bit more than just bulk storage: Make sure you check out Sonarr v3 beta. I just upgraded my Unraid server to a Ryzen 7 2800 and I love being able to allocate 2-4 cores per VM. I really need to get WireGuard figured out on Unraid.
|
# ? Aug 28, 2019 03:10 |
|
That is a nice dashboard, I need to update I think! How is it to migrate unraid to a new motherboard/cpu? Just move drives over and boot from USB stick?
|
# ? Aug 28, 2019 03:40 |
|
priznat posted:How is it to migrate unraid to a new motherboard/cpu? Just move drives over and boot from USB stick? Yep. The licence is tied to the GUID of the USB stick.
|
# ? Aug 28, 2019 03:55 |
|
Heners_UK posted:Yep. The licence is tied to the GUID of the USB stick. Nice. No gotchas with recognizing the drives on a new system? As long as they are on a SATA controller supported by the OS, they should be fine?
|
# ? Aug 28, 2019 04:15 |
|
THF13 posted:You can add a second cache disk and it will default to a mirrored pair.
|
# ? Aug 28, 2019 05:59 |
|
|
SSDs don't care about stable mounting. I'm lazy as hell, and I think I have maybe one properly mounted SSD in any computer that isn't a laptop.
|
# ? Aug 28, 2019 06:12 |