Catatron Prime
Aug 23, 2010

IT ME



Toilet Rascal
If I wanted to duplicate 18TB as cheaply as possible, what would you guys recommend as far as drives and build go?

Bonus points for portability since I need to copy these elsewhere and bring them home. This data doesn't need a ton of durability as far as RAID and high availability go, though; I can add that in later. I just need to get a copy now.

My initial thoughts are FreeNAS and a micro-ATX enclosure, but as I'm looking at prices for parts, I wonder if I'd just be better off buying a five-bay Synology expansion enclosure or something? I already have a Synology 1518+ NAS, so would that be cheaper in terms of energy and build cost?

Rooted Vegetable
Jun 1, 2002
To the absolute core of your question: I'd be tempted not to make the decision now. Buy two 10TB EasyStores, do not shuck them, format them to a reasonably up-to-date format that you expect to use later (e.g. exFAT), and copy the data that way. That would be portable and could perhaps get it onto your existing Synology later.

However, to your speculation, I agree: if you wanted to build out a NAS cheaply, my personal approach was old hardware with unRaid, but FreeNAS is clearly loved, as are OpenMediaVault and plain old Ubuntu Server.

Putting it all together, I think in your case, since you're comfortable with the Synology, I'd buy the expander and some drives to add to it. To move the data, I'd do what I suggested in the first paragraph and move it onto your Synology (assuming you have 18TB of free space). When you're done you could shuck the drives and add them to your new expander.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

BangersInMyKnickers posted:

Here you go:

https://pastebin.com/iJqN34Gi

Looks like it might be doing 512 byte TLPs instead of 256?

Thanks, and yup, that's it. In lspci output, DevCap reports a device's advertised capabilities, while DevCtl reports what it's actually configured to use. There are many root ports, bridges, and devices (e.g. a 50GbE NIC) showing up there which have negotiated a 512 byte max payload.
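
If you want to eyeball this on your own boxes, something like this quick-and-dirty Python (a rough sketch of mine, untested; assumes Linux with pciutils installed and root so lspci can dump every device) pulls the capable vs configured MaxPayload per device:

code:
#!/usr/bin/env python3
# Sketch: compare advertised (DevCap) vs negotiated (DevCtl) PCIe Max Payload
# Size per device by parsing `lspci -vv`. Assumes Linux + pciutils, run as root.
import re
import subprocess

MPS_RE = re.compile(r"MaxPayload (\d+) bytes")

def mps_report():
    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
    device, cap, want_ctl = None, None, False
    for line in out.splitlines():
        if line and not line[0].isspace():          # new device header, e.g. "3b:00.0 Ethernet ..."
            device, cap, want_ctl = line.split(" ", 1)[0], None, False
        elif "DevCap:" in line:
            m = MPS_RE.search(line)
            cap = int(m.group(1)) if m else None    # what the device *can* do
        elif "DevCtl:" in line:
            want_ctl = True                         # configured value follows on a later line
        elif want_ctl:
            m = MPS_RE.search(line)
            if m:
                print(f"{device}: capable {cap}, configured {m.group(1)}")
                want_ctl = False

if __name__ == "__main__":
    mps_report()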

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Seems like my AMD lab server is linking at 128 bytes.

My Dell FreeNAS box with an Intel CPU claims to be capable of 256, but is only doing 128?

H110Hawk
Dec 28, 2006

Heners_UK posted:

To the absolute core of your question: I'd be tempted not to make the decision now. Buy two 10TB EasyStores, do not shuck them, format them to a reasonably up-to-date format that you expect to use later (e.g. exFAT), and copy the data that way. That would be portable and could perhaps get it onto your existing Synology later.

This is my suggestion as well. Unless you have a desperate need for 18TB of contiguous space, just chunk it in half onto two 10 or 12TB easystores when they go on sale.

You didn't specify speed, unified/contiguous space, interface, etc.

Crunchy Black
Oct 24, 2017

by Athanatos

priznat posted:

There's not so much a definitive guide on what devices have what MPS (max payload size); it's more just how devices seem to come out. The actual acceptable values (up to 4k) are in the PCIe spec.

Interestingly the Intel DMA engine (crystal beach) will only do 64 byte TLPs, so it sucks even worse despite supposedly being for high throughput DMA transfers.

Even though I'm out of that space now, this actually explains a shitpile about why you'd have randomly piss-poor performance over the UPI/QPI bus when in theory you should only experience a 10% or less impedance to traffic passing from proc to proc, assuming you're not overtaxing the bus. (UPI/QPI is a souped-up Intel-specific implementation of PCIe as best as I could tell.)

Crunchy Black fucked around with this message at 22:31 on Aug 20, 2019

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Crunchy Black posted:

Even though I'm out of that space now, this actually explains a shitpile about why you'd have randomly piss-poor performance over the UPI/QPI bus when in theory you should only experience a 10% or less impedance to traffic passing from proc to proc, assuming you're not overtaxing the bus. (UPI/QPI is a souped-up Intel-specific implementation of PCIe as best as I could tell.)

Yeah the crystal beach DMA is notoriously lovely, and offload engines (usually PCIe FPGA endpoints) are being used to work around it and get much better throughput point to point. And Linux now has a bunch of built-in support for P2P as well, which helps a lot.

I believe the AMD fabric (I forget the name) is also a souped-up PCIe; in multi-proc setups the PCIe lanes just become their equivalent of UPI.

Waiting for Epyc 2 to become available like :f5:

Crunchy Black
Oct 24, 2017

by Athanatos

priznat posted:

Yeah the crystal beach DMA is notoriously lovely, and offload engines (usually PCIe FPGA endpoints) are being used to work around it and get much better throughput point to point. And Linux now has a bunch of built-in support for P2P as well, which helps a lot.

I believe the AMD fabric (I forget the name) is also a souped-up PCIe; in multi-proc setups the PCIe lanes just become their equivalent of UPI.

Waiting for Epyc 2 to become available like :f5:

Infinity Fabric, but yeah. Waiting on new Threadripper to replace my Skylake-S for sure.

It's also why Cascade Lake-WS in the new Mac Pro, for example, inexplicably has 64 lanes as opposed to the 48 you get in Cascade Lake-SP... suddenly all the pins needed for proc-to-proc can be used for a PCIe implementation :downs:

Crunchy Black fucked around with this message at 23:46 on Aug 20, 2019

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Seems positively quaint compared to Epyc’s 128 :haw:

Crunchy Black
Oct 24, 2017

by Athanatos
It's why all the C4ISR mil guys are clamoring for it, so they can slam a gently caress pile of FPGAs and high-speed fabric into them. Problem is none of the SHB/backplane "long life" guys will touch it, because when those same mil-ind guys do their lifecycle analysis the beanpushers freak out when they can't guarantee how long the constituent parts are going to be available, and then they end up spending 19x to get custom ASICs built to do the same thing. "COTS" my rear end lol.

AMD has a good thing going but while going fabless might have saved them, it definitely hurt them in other ways.

Crunchy Black fucked around with this message at 00:00 on Aug 21, 2019

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
The hyperscale dudes don't care, at least; to them anything 2 years old is positively ancient!

It’s pretty amazing the hardware churn they have, they’re still building systems while other people are recycling parts of it for being too old.

Also, talking about MPS on TLPs: while storage and fast networking like large TLPs, machine learning stuff is 64 byte TLPs at the most, and often down at 32 bytes or less. Completely opposite requirements, which is always fun for system architects.

Crunchy Black
Oct 24, 2017

by Athanatos
Yeah while I sorta miss being down in the weeds in the x86 hardware and burgeoning not-uber-classified military machine learning world, I'm in the small to mid-scale private devops cloud arena now, though still with Intel CPUs, and I feel like my job is *slightly* more secure. Mostly because until someone figures out a way to reliably multithread app compiling, there will always be a linear need for hardware.

Sorry for the not storage related derail, folks.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I kinda wish there was an enterprise hardware/system protocol deep-dive thread, but I am way too lazy to start one and it probably wouldn't have a lot of critical mass to keep going. But I love every opportunity to gab on stuff, especially NVMe/PCIe, even though I'm a bit limited on good dish based on work and our NDAs with other companies.

SSD thread is also good for occasional industry derails too!

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

priznat posted:

I kinda wish there was an enterprise hardware/system protocol deep-dive thread, but I am way too lazy to start one and it probably wouldn't have a lot of critical mass to keep going. But I love every opportunity to gab on stuff, especially NVMe/PCIe, even though I'm a bit limited on good dish based on work and our NDAs with other companies.

SSD thread is also good for occasional industry derails too!

We need a Server Goodies thread. Especially if it's old vintage servers.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

We need a Server Goodies thread. Especially if it's old vintage servers.
That sounds like an excellent idea, holy poo poo.
That means I also have a place to whinge about the SIX loving memory interconnects: Gen-Z, CCIX, CXL, OpenCAPI, NVLink, Infinity Fabric. Can we maybe just have one or two?

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Crunchy Black posted:

Even though I'm out of that space now, this actually explains a shitpile about why you'd have randomly piss-poor performance over the UPI/QPI bus when in theory you should only experience a 10% or less impedance to traffic passing from proc to proc, assuming you're not overtaxing the bus. (UPI/QPI is a souped-up Intel-specific implementation of PCIe as best as I could tell.)

No, QPI/UPI are profoundly different to PCIe. PCIe is "hey PCI is getting old and lovely, what if we jacked it up a bit, removed the original parallel bus foundation, and hacked in a new SERDES based physical layer?". It has terrible latency (ironically worse than original parallel-bus PCI) and no support for cache coherency. QPI (and its descendant UPI) are what happened when Intel asked the ex-DEC engineers it had inherited to design a clean-sheet big-system cache coherent CPU interconnect. It's designed around minimizing latency to the greatest extent possible for a SERDES-based interconnect, and has all these cool RAS features to boot.

I have no idea where you got the idea of "10% or less impedance" from. They're good interconnects, but there's no way QPI/UPI can compete with the nameless ring or mesh busses used for communications between cores and memory controllers within a single CPU die. The industry term for any system which relies on QPI (or other interconnect fabric like AMD Hypertransport or Infinity Fabric) to link together two (or more) clusters of CPUs/memory controllers is NUMA, for non-uniform memory access. The "non-uniform" refers to the giant difference in performance between accessing something relatively local and something remote.

There are always performance bugs lurking on NUMA systems. Software which accidentally creates loads of traffic on the higher level interconnect isn't going to have a good time. Some operating system schedulers even try to minimize this with special scheduler algorithms (google NUMA affinity, basic idea is to try to keep threads/processes which might be touching the same data close together).
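
To make the affinity idea concrete, here's a toy sketch (my own hypothetical example, Python on Linux, assuming the usual /sys/devices/system/node layout) that pins the current process to node 0's CPUs:

code:
#!/usr/bin/env python3
# Toy NUMA-affinity sketch: keep this process on one node's CPUs so its traffic
# (mostly) stays off the cross-socket link. Assumes Linux with /sys/devices/system/node.
import glob
import os

def cpus_of_node(node):
    """Parse the kernel's cpulist for a node, e.g. '0-15,32-47' -> {0..15, 32..47}."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpus = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

if __name__ == "__main__":
    nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
    print(f"{len(nodes)} NUMA node(s) on this box")
    os.sched_setaffinity(0, cpus_of_node(0))      # restrict ourselves to node 0's cores
    print("now pinned to CPUs:", sorted(os.sched_getaffinity(0)))
CPU pinning only covers the scheduler half; Linux's default first-touch policy then tends to allocate that process's pages on the local node, and for hard guarantees you'd reach for numactl --membind or libnuma.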

BlankSystemDaemon
Mar 13, 2009



BobHoward posted:

No, QPI/UPI are profoundly different to PCIe. PCIe is "hey PCI is getting old and lovely, what if we jacked it up a bit, removed the original parallel bus foundation, and hacked in a new SERDES based physical layer?". It has terrible latency (ironically worse than original parallel-bus PCI) and no support for cache coherency. QPI (and its descendant UPI) are what happened when Intel asked the ex-DEC engineers it had inherited to design a clean-sheet big-system cache coherent CPU interconnect. It's designed around minimizing latency to the greatest extent possible for a SERDES-based interconnect, and has all these cool RAS features to boot.

I have no idea where you got the idea of "10% or less impedance" from. They're good interconnects, but there's no way QPI/UPI can compete with the nameless ring or mesh busses used for communications between cores and memory controllers within a single CPU die. The industry term for any system which relies on QPI (or other interconnect fabric like AMD Hypertransport or Infinity Fabric) to link together two (or more) clusters of CPUs/memory controllers is NUMA, for non-uniform memory access. The "non-uniform" refers to the giant difference in performance between accessing something relatively local and something remote.

There are always performance bugs lurking on NUMA systems. Software which accidentally creates loads of traffic on the higher level interconnect isn't going to have a good time. Some operating system schedulers even try to minimize this with special scheduler algorithms (google NUMA affinity, basic idea is to try to keep threads/processes which might be touching the same data close together).
Wasn't InfiniBand supposed to replace PCI(-ex)? And maybe Gen-Z is supposed to do the same, but this time for real, now that OmniPath has been killed.

Speaking of NUMA systems, AMD's topology requires scheduler patching, and so far they've only provided the patches (or the documentation for them) to Microsoft.
The worst part about NUMA is that modern CPUs don't just have two NUMA zones; some have up to three or four (although AMD deserves credit for fixing that issue in Zen 2).

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug


This was the layout for my server board; seems like it's a single PCIe pipe.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
What's my best bet for 10-12TB drives, besides shucking WD Easystores? I'm looking to reduce my drive count and increase my storage significantly, so I'm leaning towards a single Z2 pool of 5-6 disks. Ideally I'd like to double my current storage (16TB) with fewer than 8 disks and get slightly better redundancy (currently two 4-disk raidz1 arrays, 4x2TB and 4x4TB).

Is my best bet just grabbing 6 Easystores next time they're on sale? Or are any of the 12TB disks better than the others? 5x12TB would be preferable to 6x10TB given my current case layout, but 6 disks would probably be my limit for the moment.
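
(Napkin math, ignoring ZFS overhead and TB-vs-TiB: raidz2 spends two disks on parity, so 6x10TB leaves roughly 40TB usable and 5x12TB roughly 36TB; either comfortably doubles the current 16TB.)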

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
Shuck or pay double for bare WDs. The 10TB were on sale recently.

Twoflower
May 4, 2006

But what is the Internet? Is it a computer with the Internet inside?
I've had a RAID 0 with two 3TB drives for the last few years for a media server that only gets moderate usage. I was thinking about getting a third 3TB drive and making it RAID 5 for some peace of mind in case a drive fails. Is this pointless given that both of the original drives have been running for the same amount of time, making it likely that they'll fail at about the same time as each other, or during a rebuild?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Twoflower posted:

I've had a RAID 0 with two 3TB drives for the last few years for a media server that only gets moderate usage. I was thinking about getting a third 3TB drive and making it RAID 5 for some peace of mind in case a drive fails. Is this pointless given that both of the original drives have been running for the same amount of time, making it likely that they'll fail at about the same time as each other, or during a rebuild?

I wouldn't say pointless. Yes, the two older drives would be more likely to fail during the rebuild when you're relying on the parity data, but the same situation applies to any new array you buy after it's been in operation for a few years. For an array that small, the likelihood of a double-fault scenario is pretty low. I usually advise bumping to raid6 once you're getting into arrays over ~6 members. Just make sure your controller is scheduled to do read patrols on some interval so it knows when a disk is failing instead of waiting until it's catastrophic.
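
If this ends up as Linux software RAID rather than a hardware controller, the read patrol is the md "check" action. Most distros already schedule it monthly, but here's a minimal sketch of kicking one off by hand (assumes an array at /dev/md0 and root):

code:
#!/usr/bin/env python3
# Minimal sketch: start a read patrol ("check") on a Linux md array and report
# progress. Assumes software RAID at /dev/md0 and root privileges.
from pathlib import Path

MD = Path("/sys/block/md0/md")

def start_check():
    if (MD / "sync_action").read_text().strip() == "idle":
        (MD / "sync_action").write_text("check\n")       # kernel reads every block on every member

def progress():
    return (MD / "sync_completed").read_text().strip()   # "none" when idle, "X / Y" sectors while running

if __name__ == "__main__":
    start_check()
    print("scrub progress:", progress())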

KS
Jun 10, 2003
Outrageous Lumpwad
My crashplan situation's getting worse -- I have it on a VM that has my big file system NFS mounted, and it's worked great for 6 years, but the VM's started hard locking after I upgraded it to CentOS 6.10.

My current solution is to just reboot it whenever crashplan notifies me it's disconnected -- probably once a week or so. I'd like to rebuild it, but my understanding is the newer client versions may/may not run headless and definitely don't let you turn off dedupe, which is the only way backup speeds are acceptable. On an 8TB dataset, speeds were down to <10KB/sec with dedupe on but saturated my internet connection once turned off.

Guess I'm just wondering if there's anything else out there that would handle 8TB for $10/mo or so.

Splinter
Jul 4, 2003
Cowabunga!
Backblaze unlimited is $6/month (cheaper if you pay by the year). I don't think there is a Linux client though.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Splinter posted:

Backblaze unlimited is $6/month (cheaper if you pay by the year). I don't think there is a Linux client though.

Duplicacy supports backblaze.

Rooted Vegetable
Jun 1, 2002

KS posted:

My crashplan situation

There's a good docker image for crashplan that virtualizes the interface too, so you can access it via vnc or a browser. I'm using that.

If not that, then I'd say just rebuild the VM with a desktop that you remotely access to use the CP app (I use Xubuntu with XRDP for this sort of thing, but purely out of habit rather than reasoned research). Your setup doesn't sound bad.

susan b buffering
Nov 14, 2016

Thermopyle posted:

Duplicacy supports backblaze.

that’s backblaze b2 which is pay for what you use instead of a flat monthly fee.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

skull mask mcgee posted:

that’s backblaze b2 which is pay for what you use instead of a flat monthly fee.

Well poo poo.

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

Thermopyle posted:

Well poo poo.

It might be cheaper depending on your backup size. B2 is really cheap per GB.

$0.005 per GB per month. More than KS was wanting to spend for 8TB...
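
(Back of the envelope: 8TB is about 8,000GB x $0.005/GB, so about $40/month on B2, roughly 4x the $10/month target, before any dedupe or compression from the backup tool.)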

Twoflower
May 4, 2006

But what is the Internet? Is it a computer with the Internet inside?

BangersInMyKnickers posted:

I wouldn't say pointless. Yes, the two older drives would be more likely to fail during the rebuild when you're relying on the parity data, but the same situation applies to any new array you buy after it's been in operation for a few years. For an array that small, the likelihood of a double-fault scenario is pretty low. I usually advise bumping to raid6 once you're getting into arrays over ~6 members. Just make sure your controller is scheduled to do read patrols on some interval so it knows when a disk is failing instead of waiting until it's catastrophic.

Thanks for the advice! I think I will give raid5 a try. I got scared off by a bunch of talk about how it's trash, but that all seemed to be from about 5-10 years ago and maybe a little paranoid.

IOwnCalculus
Apr 2, 2003





RAID is not backup, so realistically RAID5 should be fine, because if the rebuild fails it wasn't the only copy of data that can't be recreated, right?

With that said I personally wouldn't trust a basic mdraid-style RAID5 these days, even with data that can be recreated, just because it's a pain. ZFS at least will tell me what data it can't recover instead of flushing the whole array down the shitter.

Of course, then I abuse this trust of ZFS by running a massive RAID50-equivalent.

BlankSystemDaemon
Mar 13, 2009



IOwnCalculus posted:

Of course, then I abuse this trust of ZFS by running a massive RAID50-equivalent.
I've seen people run RAID 510-equivalents on ZFS. orz

CopperHound
Feb 14, 2012

I got my u-nas 810a case up and running. It turns out my use case involves a bit more than just bulk storage:


I have a lot to learn, but I think I really like unraid. I don't think I would trust it for anything mission critical and it doesn't have the performance of a striped array, but the JBOD implementation pretty much lets me throw whatever hardware I have at it. I can tell it not to split sub-directories across drives, so I should be able to recover stuff off of individual drives if I need to pull them out of the array to read. Adding a cache drive has helped write performance a lot. The SSD does not have any parity, but in my case I can tolerate that risk until the mover process has a chance to run.

e: I do hate how unraid's licensing is tied to physical USB sticks.

CopperHound fucked around with this message at 18:22 on Aug 27, 2019

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

CopperHound posted:

I got my u-nas 810a case up and running. It turns out my use case involves a bit more than just bulk storage:

I have a lot to learn, but I think I really like unraid. I don't think I would trust it for anything mission critical and it doesn't have the performance of a striped array, but the JBOD implementation pretty much lets me throw whatever hardware I have at it. I can tell it not to split sub-directories across drives, so I should be able to recover stuff off of individual drives if I need to pull them out of the array to read. Adding a cache drive has helped write performance a lot. The SSD does not have any parity, but in my case I can tolerate that risk until the mover process has a chance to run.

e: I do hate how unraid's licensing is tied to physical USB sticks.

You can add a second cache disk and it will default to a mirrored pair.

Corb3t
Jun 7, 2003

CopperHound posted:

I got my u-nas 810a case up and running. It turns out my use case involves a bit more than just bulk storage:


I have a lot to learn, but I think I really like unraid. I don't think I would trust it for anything mission critical and it doesn't have the performance of a striped array, but the JBOD implementation pretty much lets me throw whatever hardware I have at it. I can tell it not to split sub-directories across drives, so I should be able to recover stuff off of individual drives if I need to pull them out of the array to read. Adding a cache drive has helped write performance a lot. The SSD does not have any parity, but in my case I can tolerate that risk until the mover process has a chance to run.

e: I do hate how unraid's licensing is tied to physical USB sticks.

Make sure you check out Sonarr v3 beta.

I just upgraded my Unraid server to a Ryzen 7 2800 and I love being able to allocate 2-4 cores per VM. I really need to get WireGuard figured out on unraid.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
That is a nice dashboard, I need to update I think!

How is it to migrate unraid to a new motherboard/cpu? Just move drives over and boot from USB stick?

Rooted Vegetable
Jun 1, 2002

priznat posted:

How is it to migrate unraid to a new motherboard/cpu? Just move drives over and boot from USB stick?

Yep. The licence is tied to the GUID of the USB stick.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Heners_UK posted:

Yep. The licence is tied to the GUID of the USB stick.

Nice. No gotchas with recognizing the drives on a new system? As long as they're on a SATA controller supported by the OS they should be fine?

CopperHound
Feb 14, 2012

THF13 posted:

You can add a second cache disk and it will default to a mirrored pair.
I might have to see if I can mount a second internal drive in this case.... Or find a 512GB SATA DOM that isn't $$$$$$$$.

IOwnCalculus
Apr 2, 2003





SSDs don't care about stable mounting. I'm lazy as hell and I think I maybe have one properly mounted SSD in any computer that isn't a laptop.
