e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
somebody dumped a shitload of HPE ML30 Gen 9s on ebay for $115 shipped

https://www.ebay.com/itm/125245249590

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Moving my 8700k setup into a node 804 case and want to add more SATA ports, any Canadian folks know of a good place to order SAS2008/3008 cards that are reasonably reputable? Amazon.ca has a ton but I'm just worried about sketchy sellers.

a skeleton
Jul 29, 2004

e.pilot posted:

somebody dumped a shitload of HPE ML30 Gen 9s on ebay for $115 shipped

https://www.ebay.com/itm/125245249590

Would that be an option for a Plex server? What all would that entail, what parts would I need?

I'm coming from a Synology DS1815+ that kicked the bucket and I'm trying to figure out a new setup for around $800. I have drives so I'm not factoring that into the final cost.

I was looking at a DS920+, DS923+ or building my own with a Celeron with integrated graphics. I don't need HW transcoding at the moment, as I plan to direct stream to my Apple TV 4K. It might be nice to have in the future. I know the 920 is discontinued and costs more than it should, and the 923 doesn't support HW transcoding. From what research I've done, many suggested builds seem overkill for my use case. I appreciated how hands-off the Synology was after setup, so I'm weighing how much I actually want to mess with unRAID or whatever. I feel like by building my own I might end up with an equivalent or better system for less than a Synology.

Any suggestions?

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
The build I posted a few pages back, targeting unRAID, is around $800 for 8 bays and more powerful than a similarly priced Synology.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

a skeleton posted:

Would that be an option for a Plex server? What all would that entail, what parts would I need?


It absolutely would be an option for plex, and dollar for dollar it'd run circles around a synology

some things to look for:

some more ram, that only comes with 4gb, but a stick of 16gb ECC is like $22 on ebay

as big of a SATA SSD as you can afford for cache, bigger would be better but if you’re trying to stay on a budget a 256gb is around $15-20, 1tb like $50-60

as much as you want/can spend on 4x spinning drives

a used quadro p400 for $50-60 for hardware video transcoding

an unraid license for $60 or some flavor of FOSS NAS OS

well under $300 minus the cost of spinning rust

e.pilot fucked around with this message at 05:33 on Jan 2, 2023

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I'm pretty sure the Pentium G4400 in that system supports Quick Sync, so you'd have hardware transcoding out of the box as long as the HP motherboard isn't doing something screwy to disable the IGP entirely.

If all you're running is Plex, 8GB of memory should be adequate (maybe even 4GB) but I'd probably also get 16GB considering how cheap it is.

Eletriarnation fucked around with this message at 05:46 on Jan 2, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Eletriarnation posted:

the HP motherboard isn't doing something screwy to disable the IGP entirely.

it is unfortunately

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Ah, well, that sucks. If you are ever going to have more users than yourself on the server you'll probably want hardware transcoding, as a Skylake dual-core probably can't do more than a couple full HD streams in software before it chokes. If you're not going to be transcoding at all or just running one stream, you'll be fine.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
yeah, fortunately the P400 can take just about whatever you can throw at it, I had it doing 4 4k streams at once before I ran out of things to transcode to
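
For anyone wanting to sanity-check that kind of load outside of Plex, here's a minimal sketch, assuming ffmpeg is built with NVENC/NVDEC support: it kicks off a few parallel GPU transcodes and throws the output away. The input filenames are hypothetical placeholders.

```python
import subprocess

SOURCES = ["movie1.mkv", "movie2.mkv", "movie3.mkv", "movie4.mkv"]  # hypothetical test files

procs = []
for src in SOURCES:
    # Decode on the GPU (NVDEC), re-encode with NVENC, and discard the result.
    cmd = [
        "ffmpeg", "-hide_banner", "-y",
        "-hwaccel", "cuda", "-i", src,
        "-c:v", "h264_nvenc", "-b:v", "8M",
        "-an", "-f", "null", "-",
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
print("exit codes:", [p.returncode for p in procs])
```

If they all finish well above realtime speed, the card has headroom for more simultaneous streams.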

a skeleton
Jul 29, 2004

e.pilot posted:

It absolutely would be an option for plex, and dollar for dollar it'd run circles around a synology

some things to look for:

some more ram, that only comes with 4gb, but a stick of 16gb ECC is like $22 on ebay

as big of a SATA SSD as you can afford for cache, bigger would be better but if you’re trying to stay on a budget a 256gb is around $15-20, 1tb like $50-60

as much as you want/can spend on 4x spinning drives

a used quadro p400 for $50-60 for hardware video transcoding

an unraid license for $60 or some flavor of FOSS NAS OS

well under $300 minus the cost of spinning rust

Thanks for the advice!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
HBA Purchase advice:

Thinking about either this one:

DELL H310 6Gbps HBA FW:P20 LSI 9211-8i IT Mode ZFS FreeNAS unRAID 2* SFF SATA

Or this one:
LSI 9240-8i 6Gbps SAS HBA FW:P20 9211-8i IT Mode ZFS FreeNAS unRAID 2* SFF SATA

They look about the same, the only difference being where the mini-SAS connectors come out: the top or the end of the card.

BlankSystemDaemon
Mar 13, 2009



They're both running the 9211-8i firmware so they will behave exactly the same.
Location of the SFF-8087 connectors doesn't matter unless you've got a server where cables come pre-routed and can't be extended.
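
If it helps, here's a rough sketch for double-checking a card once it arrives, assuming LSI's sas2flash utility is installed and run as root: it dumps the controller report and does a naive scan for IT-mode P20 firmware. The string checks are guesses at the report wording, not gospel.

```python
import subprocess

out = subprocess.run(["sas2flash", "-list"], capture_output=True, text=True, check=True).stdout
print(out)

# Very rough checks on the report; exact wording can vary between firmware builds.
if "IT" not in out:
    print("warning: card does not report IT firmware")
if "20.00" not in out:
    print("warning: firmware does not look like the P20 release")
```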

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BlankSystemDaemon posted:

They're both running the 9211-8i firmware so they will behave exactly the same.
Location of the SFF-8087 connectors doesn't matter unless you've got a server where cables come pre-routed and can't be extended.

Yeah, I'm thinking about where the cable routing holes are on the Node 804 to give the cleanest cable runs. I think the top, maybe, but I'll have another look.

That will give me 14 total SATA ports including the ones on the motherboard, so I should be all set.

a skeleton
Jul 29, 2004
I'm having trouble finding cheap RAM for the ML30 Gen9. The user guide says it doesn't support any kind of RAM other than ECC UDIMMs, which are definitely not around $20. Am I missing something? I see a lot of non-ECC and RDIMM options at that price, but the ECC UDIMMs seem to be $60 and up.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
ECC UDIMMs are almost always more expensive than either RDIMMs or standard DIMMs, even on the used market. RDIMMs get dumped by the truckload by all the datacenter upgrades over time, and it's similar for mainstream consumer parts. ECC UDIMMs are stuck in a middle market that keeps shrinking as industry trends continue to bifurcate. Pretty sure those machines can accept standard DIMMs, but perhaps they got rid of that (I used to put standard DIMMs into my old HP MicroServer from the first couple of generations).

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I have a TrueNAS filer that is the shared storage for a VMware cluster to which I'll be adding storage and I need help deciding on the next disks to add.

How big of a performance difference, IOPS-wise, will there be between adding two 2TB M.2 disks on two different PCIe cards in software RAID1 or adding four 1TB disks on two PCIe cards in a pool (either RAID10 or RAIDz) ?

BlankSystemDaemon
Mar 13, 2009



Agrikk posted:

I have a TrueNAS filer that is the shared storage for a VMware cluster to which I'll be adding storage and I need help deciding on the next disks to add.

How big of a performance difference, IOPS-wise, will there be between adding two 2TB M.2 disks on two different PCIe cards in software RAID1 or adding four 1TB disks on two PCIe cards in a pool (either RAID10 or RAIDz) ?
Four 1TB disks (if they're actually 4 disks and not 2 2TB disks with 1TB partitions) in striped mirrors will always be faster than both a single mirror and raidz. However, if both disks in the same mirror pair die, you lose the data.

It's also important to note that NVMe is also faster than SATA (well, AHCI), because while AHCI only has 1 queue of 32 commands, NVMe has 64k queues with 64k items each.

Also, since it's TrueNAS, I assume that means FreeBSD - in which case you'll want to ensure that it's using the nda(4) driver - which you can do by setting the loader tunable hw.nvme.use_nvd to 0. This means devices that were using nvd(4) will now be using the other driver, which you want because it has an I/O scheduler.
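
To make that concrete, here's a minimal sketch of both suggestions, with hypothetical pool and device names: creating the four-disk pool as striped mirrors, and appending the loader tunable so FreeBSD attaches NVMe namespaces with nda(4) instead of nvd(4). Both steps need root, and the tunable only takes effect on the next boot.

```python
import subprocess

# Four 1TB NVMe namespaces arranged as striped mirrors (two mirror vdevs striped together):
subprocess.run(
    ["zpool", "create", "fastpool",
     "mirror", "nvd0", "nvd1",
     "mirror", "nvd2", "nvd3"],
    check=True,
)

# Prefer nda(4), which has an I/O scheduler, over nvd(4); takes effect at next boot.
# After the switch the same namespaces show up as nda0, nda1, ... but ZFS finds its
# vdevs by on-disk label, so the pool imports fine either way.
with open("/boot/loader.conf", "a") as f:
    f.write('hw.nvme.use_nvd="0"\n')
```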

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

BlankSystemDaemon posted:

Four 1TB disks (if they're actually 4 disks and not 2 2TB disks with 1TB partitions) in striped mirrors will always be faster than both a single mirror and raidz. However, if both disks in the same mirror pair die, you lose the data.

Raid is not backup, etc.

quote:

It's also important to note that NVMe is also faster than SATA (well, AHCI), because while AHCI only has 1 queue of 32 commands, NVMe has 64k queues with 64k items each.

I didn't know that. I always thought that NVMe was just an incremental improvement to SATA in a different form factor. This is good to know.

quote:

Also, since it's TrueNAS, I assume that means FreeBSD - in which case you'll want to ensure that it's using the nda(4) driver - which you can do by setting the loader tunable hw.nvme.use_nvd to 0. This means devices that were using nvd(4) will now be using the other driver, which you want because it has an I/O scheduler.

And this is a gold mine. Thanks for this!

Now,

Any insights in performance differences between 2x 2TB in RAID-1 vs 4x 1TB in RAID-10?

I'd expect that 4 disks in RAID-10 would be faster due to having double the amount of IOPS available. Am I missing something?

Agrikk fucked around with this message at 06:27 on Jan 3, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
I bought a bunch of supposedly unsupported stuff for the gen9 that should in theory work, I can report back once it all comes in.

The gen10 absolutely supports run of the mill non-ecc memory, it’s what I’ve got in it now.

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
I have truenas running on an old dell workstation with an E3 v3 xeon. it has 2 pools, each with 1 pair of hdds (2x 14tb mirror, 2x 8tb mirror)

After maxing out the ram, what's the next thing I can do to improve performance? It's most noticeable when I navigate to a network share with lots of small files in Finder, or when I try to find a specific video by opening them one after another.

l2arc ssd? zfs "special" vdev? slog ssd?

Yaoi Gagarin
Feb 20, 2014

Wild EEPROM posted:

I have truenas running on an old dell workstation with an E3 v3 xeon. it has 2 pools, each with 1 pair of hdds (2x 14tb mirror, 2x 8tb mirror)

After maxing out the ram, what's the next thing I can do to improve performance? It's most noticeable when I navigate to a network share with lots of small files in Finder, or when I try to find a specific video by opening them one after another.

l2arc ssd? zfs "special" vdev? slog ssd?

SSDs for a special vdev should help with the navigation part. I believe there's a setting to allow small files to live on it but by default it only holds metadata. Nothing will really help with opening a bunch of videos one by one though

BlankSystemDaemon
Mar 13, 2009



Agrikk posted:

I didn't know that. I always thought that NVMe was just an incremental improvement to SATA in a different form factor. This is good to know.

Any insights in performance differences between 2x 2TB in RAID-1 vs 4x 1TB in RAID-10?

I'd expect that 4 disks in RAID-10 would be faster due to having double the amount of IOPS available. Am I missing something?
There are a few other things about NVMe that are outlined in this PDF presentation from 2012.

Striped mirrors are faster than a simple mirror, yes, at least in terms of measurable read performance, but whether it's noticeable is something I have doubts about when the system is already using NVMe SSDs.

Wild EEPROM posted:

I have truenas running on an old dell workstation with an E3 v3 xeon. it has 2 pools, each with 1 pair of hdds (2x 14tb mirror, 2x 8tb mirror)

After maxing out the ram, what's the next thing I can do to improve performance? It's most noticeable when I navigate to a network share with lots of small files in Finder, or when I try to find a specific video by opening them one after another.

l2arc ssd? zfs "special" vdev? slog ssd?
VostokProgram is right, the special vdev is for storing (not caching, hence you need at least two devices for mirroring!) metadata (i.e. things needed for navigating directories) via allocation classes.
And yes, you can absolutely use allocation classes to store smaller files; it's as simple as using the per-dataset special_small_blocks property that's documented in zfsprops(7).

Opening up one file at a time can be improved by having an L2ARC, but there's a problem you'll run into where you basically need to have an L2ARC the size of your entire pool - and since L2ARC requires mapping LBAs in memory at ~70 bytes per LBA, you very quickly run out of memory.
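
As a concrete sketch of both points (the dataset name and sizes are hypothetical): the first command routes small blocks to the special vdev via special_small_blocks, and the arithmetic estimates the RAM the L2ARC headers would consume using the ~70 bytes-per-record figure above.

```python
import subprocess

# Route blocks <= 32K for this dataset to the special vdev instead of the spinning disks.
subprocess.run(["zfs", "set", "special_small_blocks=32K", "tank/media"], check=True)

# Back-of-envelope RAM cost of indexing an L2ARC device: one header per cached record.
l2arc_size_bytes = 1 * 1024**4    # a hypothetical 1 TiB L2ARC SSD
avg_record_bytes = 128 * 1024     # 128K records, typical for big media files
header_bytes = 70                 # approximate per-record header cost quoted above

ram_needed = l2arc_size_bytes / avg_record_bytes * header_bytes
print(f"~{ram_needed / 1024**2:.0f} MiB of ARC spent tracking the L2ARC")
# ~560 MiB here is tolerable, but shrink the record size (lots of tiny files) and
# the per-record count, and therefore the RAM cost, balloons quickly.
```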

a skeleton
Jul 29, 2004

e.pilot posted:

I bought a bunch of supposedly unsupported stuff for the gen9 that should in the theory work, I can report back once it all comes in.

The gen10 absolutely supports run of the mill non-ecc memory, it’s what I’ve got in it now.

Sounds good, I grabbed these two sticks to try independently, since I was under budget thanks to your suggestion.

DDR4 ECC UDIMM

DDR4 ECC RDIMM (this one doesn't work!)

Hopefully one will work.

e: for the RDIMM update.

a skeleton fucked around with this message at 00:46 on Jan 24, 2023

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So what's the reason why desktop CPUs won't support RDIMMs? How much complexity would that add to the memory controller? IIRC the server CPUs can run both? I guess that dumb notch moving about doesn't help?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'm trying to remember the circuitry required to handle registered DIMMs correctly, but the gist is that RDIMMs add a register on the module that buffers the command/address lines and takes load off the CPU's memory controller, while UDIMMs leave the controller driving every chip directly. There are power efficiency, latency, and other trade-offs I can't remember off the top of my head that make RDIMMs more expensive to produce but better in TCO than UDIMMs. I suspect the market would allocate capital more efficiently if the segmentation were mobile versus server+desktop rather than this mobile / server / workstation three-way wishful thinking. Like come on, consumers mostly don't care about anything besides a laptop and a phone already; just pull the plug on the UDIMM and DIMM BS and go whole hog on RDIMMs, y'all, in 2023, please.

IOwnCalculus
Apr 2, 2003





Consumers don't even know (let alone care about) ECC or RDIMMs at all, but they do care about cheap and fast. RDIMMs cost more to make and are slower than UDIMMs.

Intel exploits this as a means of product segmentation both ways. They want people who care about ECC to pay for Xeons, not Celerons, and they want to cut the cost of those Celerons as hard as they can.

AMD doesn't go quite that harsh and will technically support ECC on consumer-oriented chips, but full motherboard and BIOS support in that category is spotty.

Edit: since I was mostly beaten, here's a reminder to make sure your server notifications are set up properly. I apparently didn't even know I had a dead drive in my 20-drive sin-against-ZFS array for at least two weeks. It was there on 12/12, it quietly disappeared before 12/23, and I didn't realize it until the pool did another scrub a few days ago.

IOwnCalculus fucked around with this message at 21:32 on Jan 4, 2023
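
On the notifications point, a bare-bones sketch of the kind of check that would have caught this sooner: run it from cron and have it complain whenever `zpool status -x` reports anything other than healthy pools. The mail command and recipient address are placeholders for whatever alerting you actually use.

```python
import subprocess

status = subprocess.run(
    ["zpool", "status", "-x"], capture_output=True, text=True
).stdout.strip()

if status != "all pools are healthy":
    # Something is degraded, faulted, or missing: mail yourself the full report.
    full = subprocess.run(["zpool", "status"], capture_output=True, text=True).stdout
    subprocess.run(
        ["mail", "-s", "zpool needs attention", "you@example.com"],
        input=full, text=True,
    )
```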

Zorak of Michigan
Jun 10, 2006


Anyone have a sense of how much good SATA powersaving modes do? Right now, I have the drives in my 8-disk RAIDZ2 freeNAS setup set to power saving level 128 (max savings without spindown). I've been thinking of switching my Kodi devices to a photo slide show screen saver, but that would mean that there's always traffic to the NAS and it never goes into power save. I was about to buy a Kill-a-watt and find out directly, but it occurs to me that if non-spindown power saving is generally known to be worthless, the cost of the Kill-a-watt would just be wasted.

BlankSystemDaemon
Mar 13, 2009



IOwnCalculus posted:

Consumers don't even know (let alone care about) ECC or RDIMMs at all, but they do care about cheap and fast. RDIMMs cost more to make and are slower than UDIMMs.

Intel exploits this as a means of product segmentation both ways. They want people who care about ECC to pay for Xeons, not Celerons, and they want to cut the cost of those Celerons as hard as they can.

AMD doesn't go quite that harsh and will technically support ECC on consumer-oriented chips, but full motherboard and BIOS support in that category is spotty.

Edit: since I was mostly beaten, here's a reminder to make sure your server notifications are set up properly. I apparently didn't even know I had a dead drive in my 20-drive sin-against-ZFS array for at least two weeks. It was there on 12/12, it quietly disappeared before 12/23, and I didn't realize it until the pool did another scrub a few days ago.
I'd argue that consumers do care about ECC, they just don't know it.

The original PC specification was built with ECC because IBM used ECC in mainframes, so it never occurred to them that the main memory shouldn't have ECC (similar to how Sun servers were built with ECC, hence why in-memory checksumming is only a debug feature in ZFS, because it was built with the assumption that ECC would be available).
PC clone companies then proceeded to cost-cut the PC spec down to the absolute minimum viable product they could get away with.

This resulted, among other things, in a whole lot of BSODs over the years. And by a whole lot, I mean that Microsoft once estimated that upwards of 40% of crashes were due to bad memory (I believe I've linked the study ITT).
Since consumers do care about BSODs, in the sense that they will loudly complain about them as "computer froze, fix this!!!111oneoneone", I'd say that qualifies as them caring even though they don't know it.

It's also quite interesting how CPU caches still have ECC memory, and how DDR has finally reached the point where DRAM suddenly needs ECC to function - which is definitely because the error rates have suddenly gotten too big, and definitely not because the DRAM industry consists of three companies in total.

YerDa Zabam
Aug 13, 2016



Combat Pretzel posted:

So what's the reason why desktop CPUs won't support RDIMMs? How much complexity would that add to the memory controller? IIRC the server CPUs can run both? I guess that dumb notch moving about doesn't help?

$€£

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Combat Pretzel posted:

So what's the reason why desktop CPUs won't support RDIMMs? How much complexity would that add to the memory controller? IIRC the server CPUs can run both? I guess that dumb notch moving about doesn't help?

RDIMMs have some real downsides as well, beyond just cost. They are higher latency than UDIMMs, and I think that even if they were usable on desktop platforms, 98% of situations and users would not merit RDIMMs. It sure stinks to be part of the 2% that would, though.

Captain Apollo
Jun 24, 2003

King of the Pilots, CFI
I took the last Gen9 linked in this thread. This will be my first NAS. Ordered the p400 to go with it. Time to get some HDDs I guess!

Clark Nova
Jul 18, 2004

Combat Pretzel posted:

So what's the reason why desktop CPUs won't support RDIMMs? How much complexity would that add to the memory controller? IIRC the server CPUs can run both? I guess that dumb notch moving about doesn't help?

in addition to the actual cost/complexity, Intel loooooves to use this as a market segmentation feature because its one of the few things differentiating a low-end xeon from a desktop chip

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

I'd argue that consumers do care about ECC, they just don't know it.

I don't disagree, but for the computers most people buy, there's not even the option available to go find one with ECC. You'd need Intel and AMD to not only enable but actually require ECC support on their mass-market desktop and laptop platforms for ECC to gain a foothold in the consumer market.

Apple is pretty much the only company that could force this issue, but I'm sure they're much happier not including it and being able to make the M1 Air $30 cheaper.

YerDa Zabam
Aug 13, 2016



Yeah, most people don't care if they have soldered RAM, let alone registered ECC.
Sadly.

Arson Daily
Aug 11, 2003

I just massively upgraded my NAS from an ancient Synology DS411j with four 2TB drives to a new DS418 with four 14TB disks. I don't need the old NAS but it still works just fine. Is this thing destined for the great e-waste pile in the sky, or is it a viable donation or sale on eBay?

BlankSystemDaemon
Mar 13, 2009



You could always use it as an offline onsite backup for the most important data.

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.
If you take old hard drives to Best Buy for recycling you'll get a coupon for 10% off a new hard drive. It's one coupon per drive and they're pretty lenient. Even flash drives would work. People make a big deal about physically destroying hard drives, but I've never seen evidence of data being recovered after a 3-pass dban or something equivalent.

Boner Wad
Nov 16, 2003

A Bag of Milk posted:

If you take old hard drives to Best Buy for recycling you'll get a coupon for 10% off a new hard drive. It's one coupon per drive and they're pretty lenient. Even flash drives would work. People make a big deal about physically destroying hard drives, but I've never seen evidence of data being recovered after a 3-pass dban or something equivalent.

NIST 800-88 states that for spinning drives over 15GB, a single overwrite pass is sufficient to prevent recovery, even with state-of-the-art recovery methods.
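
For what it's worth, that single pass is simple enough to script yourself; here's a bare-bones sketch that streams zeros over an entire block device. It's destructive by design, the device path is a placeholder, and you want to be absolutely sure it points at the drive you mean to retire.

```python
import os

DEVICE = "/dev/sdX"          # hypothetical: the drive being retired
CHUNK = 4 * 1024 * 1024      # write 4 MiB at a time

zeros = bytes(CHUNK)
with open(DEVICE, "r+b", buffering=0) as disk:
    size = disk.seek(0, os.SEEK_END)   # block devices report their size via seek
    disk.seek(0)
    remaining = size
    while remaining > 0:
        n = disk.write(zeros[:min(CHUNK, remaining)])
        remaining -= n
print(f"overwrote {size / 1024**3:.1f} GiB with zeros")
```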

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Posting here because I think this thread might have the best ideas:

What's the best frontend for offline media library viewing? I'm thinking of an HTPC at a cabin, and hoping for a Roku-like or Chromecast-with-Plex level of UI comfort, ideally without a keyboard and mouse required. I had an HTPC ages ago, but ditched it for Chromecasts after finding that it was worse to use. A local network connection would be fine, or an HTPC connected directly to the TV would be fine. Is Kodi still the best option?

Edit: It's 2023, this doesn't need to be fully offline and there's low-speed phone tethering available, but files will be copied in by external disk and I'm hoping to avoid having the internet in the local streaming control flow.

Twerk from Home fucked around with this message at 17:26 on Jan 9, 2023

wolrah
May 8, 2006
what?
Most newer media library applications assume that you have reasonable internet connectivity for pulling in metadata even if the media is stored locally, so yea I'd lean strongly to Kodi for this role. You can scan the media and build a library on an internet connected machine, then export all the metadata as files that get stored right alongside the media which any other Kodi client can read and import. The various download automation tools can usually even create these metadata files themselves, though I don't know how good of a job they do.
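
As a small companion to that workflow, here's a sketch for the sneakernet step: before hauling the external disk out, check which videos are missing the exported metadata an offline Kodi box would read. It assumes Kodi's export-to-separate-files layout, where each video gets a .nfo with the same basename next to it; the mount point is a placeholder.

```python
from pathlib import Path

MEDIA_ROOT = Path("/mnt/external/movies")   # hypothetical mount point of the external disk
VIDEO_EXTS = {".mkv", ".mp4", ".avi", ".m4v"}

# Any video without a sibling .nfo will show up as an unscraped item on the offline box.
missing = [
    p for p in MEDIA_ROOT.rglob("*")
    if p.suffix.lower() in VIDEO_EXTS and not p.with_suffix(".nfo").exists()
]

for p in missing:
    print("no metadata exported for:", p)
print(f"{len(missing)} video(s) without .nfo files")
```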
