BlankSystemDaemon
Mar 13, 2009



fatman1683 posted:

Thanks! This looks like a much more comprehensive tool. Is there a good method for estimating rebuild speed? I know it's affected by a lot of factors, but is there a 'safe' number for 7.2k SAS disks I can use?
It's more a question of the CPU in the system and the ZFS version than anything else.

If you check my post history ITT you'll see quite a few links to ways that the operations involved in resilvering have been vectorized in ZFS. If the ZFS implementation and CPU is new enough, it's entirely possible that the resilver speed will be limited by the write speed of the disk you're resilvering onto.
Anecdotally, real-world experience suggests to me that raidz3 rebuilds are no more than 3-4 times slower than what's achievable with simple mirroring, but that's just based on half-remembered stuff from when I ran storage servers professionally.
Whether that's the number you want to aim for, though, is harder to judge. It might be worth trying to use the worst possible estimates for everything - and even in the worst-possible case where every read from every disk is random, you're still going to get about 10MBps.

That, by the way, is why draid exists; it uses distributed spares in addition to distributed parity - so the resilver speed is much faster.

BlankSystemDaemon fucked around with this message at 18:26 on May 6, 2022


fatman1683
Jan 8, 2004
.

BlankSystemDaemon posted:

It's more a question of the CPU in the system and the ZFS version than anything else.

If you check my post history ITT you'll see quite a few links to ways that the operations involved in resilvering have been vectorized in ZFS. If the ZFS implementation and CPU is new enough, it's entirely possible that the resilver speed will be limited by the write speed of the disk you're resilvering onto.
Anecdotally, real-world experience suggests to me that raidz3 rebuilds are no more than 3-4 times slower than what's achievable with simple mirroring, but that's just based on half-remembered stuff from when I ran storage servers professionally.
Whether that's the number you want to aim for, though, is harder to judge. It might be worth trying to use the worst possible estimates for everything - and even in the worst-possible case where every read from every disk is random, you're still going to get about 10MBps.

Thanks! One thing I couldn't find an answer to is whether the resilvering operation is multithreaded. My current plan is to turn my old ESX box into the FreeNAS server, which is running on a pair of E5-2603 V4s, 1.7GHz 6-core. Slow as dogshit, but enough cores to be functional. Do you think this is going to be a significant bottleneck to the resilver and worth an upgrade, or should it be capable of capping disk write speed?


BlankSystemDaemon posted:

That, by the way, is why draid exists; it uses distributed spares in addition to distributed parity - so the resilver speed is much faster.

Ok, so according to these numbers, an 11-drive RAIDZ3 vdev seems like it would be a good balance of performance, capacity, and redundancy. Would adding a 12th drive as a draid spare be a good idea here? I could theoretically build a stripe set of two 11-drive Z3s, each with a spare, and fill up a 24-bay chassis. Looks like I misunderstood how draid works, and it seems like it's not really intended for this use case.

fatman1683 fucked around with this message at 01:24 on May 7, 2022

BlankSystemDaemon
Mar 13, 2009



fatman1683 posted:

Thanks! One thing I couldn't find an answer to is whether the resilvering operation is multithreaded. My current plan is to turn my old ESX box into the FreeNAS server, which is running on a pair of E5-2603 V4s, 1.7GHz 6-core. Slow as dogshit, but enough cores to be functional. Do you think this is going to be a significant bottleneck to the resilver and worth an upgrade, or should it be capable of capping disk write speed?

Ok, so according to these numbers, an 11-drive RAIDZ3 vdev seems like it would be a good balance of performance, capacity, and redundancy. Would adding a 12th drive as a draid spare be a good idea here? I could theoretically build a stripe set of two 11-drive Z3s, each with a spare, and fill up a 24-bay chassis. Looks like I misunderstood how draid works, and it seems like it's not really intended for this use case.
So it depends on how old the ZFS implementation in FreeNAS is (which I don't know), but AVX2-vectorized raidz resilver was added back on Nov 29, 2016, so even if it is single-threaded, it shouldn't be taking up much CPU time since your CPU has AVX2.

Again, I have to reiterate that I don't think "capping disk write speed" is what you should be expecting during a resilver. That assumes that every single record in your pool is written sequentially, that there's not a single stray read from anything else on the system, that all disks are 100% functional, and that they don't have any malignancies in their firmware.

As for draid, 12-disk raidz3 is over the point at which I'd be considering switching, as I think the recommendation is to have raidz go no wider than 9 disks.
I have two 15-wide draid3:11d:1s vdevs in a pool that I use as a local offline backup (the server in question also acts as a buildserver, occasionally, when I'm working on FreeBSD, because it has 2x Xeon E5-2667v2 and 260GB memory).
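A rough sketch of what a layout string like that works out to in usable space (`draid_usable` is a made-up helper, not from any ZFS tool; it assumes the commonly documented draid math of spares carved out of the children, with the remainder split data:parity in the ratio d:(d+p)):

```python
import re

def draid_usable(spec: str, children: int, disk_tb: float) -> float:
    """Estimate usable capacity (in TB) of a draid vdev.

    spec is an OpenZFS-style layout string such as 'draid3:11d:1s':
    parity level, data disks per redundancy group, distributed spares.
    """
    m = re.fullmatch(r"draid(\d+):(\d+)d:(\d+)s", spec)
    if not m:
        raise ValueError(f"unrecognized draid spec: {spec}")
    parity, data, spares = map(int, m.groups())
    # Spares are carved out of the children; the rest is split between
    # data and parity in the ratio d : (d + p).
    return (children - spares) * data / (data + parity) * disk_tb

# One of the 15-wide draid3:11d:1s vdevs above, assuming 10TB disks:
print(draid_usable("draid3:11d:1s", children=15, disk_tb=10))  # 110.0
```

So a 15-wide draid3:11d:1s gives you roughly 11 disks' worth of data, with 3 disks' worth of parity and 1 of distributed spare spread across all 15 spindles.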

If you wanna read more about it, I suggest zpoolconcepts(7).

BlankSystemDaemon fucked around with this message at 08:46 on May 7, 2022

fatman1683
Jan 8, 2004
.

BlankSystemDaemon posted:

So it depends on how old the ZFS implementation in FreeNAS is (which I don't know), but AVX2-vectorized raidz resilver was added back on Nov 29, 2016, so even if it is single-threaded, it shouldn't be taking up much CPU time since your CPU has AVX2.

Again, I have to reiterate that I don't think "capping disk write speed" is what you should be expecting during a resilver. That assumes that every single record in your pool is written sequentially, that there's not a single stray read from anything else on the system, that all disks are 100% functional, and that they don't have any malignancies in their firmware.

I'm definitely not expecting to reach that speed, but if it's theoretically achievable I can use a conservative figure derived from that as a basis for calculating risk of data loss. I probably will run it as-is and do some benchmarks on the pool before I move data over, if it's not adequate I can upgrade the CPUs at that point.

BlankSystemDaemon posted:

As for draid, 12-disk raidz3 is over the point at which I'd be considering switching, as I think the recommendation is to have raidz go no wider than 9 disks.
I have two 15-wide draid3:11d:1s vdevs in a pool that I use as a local offline backup (the server in question also acts as a buildserver, occasionally, when I'm working on FreeBSD, because it has 2x Xeon E5-2667v2 and 260GB memory).

If you wanna read more about it, I suggest zpoolconcepts(7).

Ok thanks, I'll do some more research. I'm still a few months away from building this (hooray for unemployment!).

BlankSystemDaemon
Mar 13, 2009



fatman1683 posted:

I'm definitely not expecting to reach that speed, but if it's theoretically achievable I can use a conservative figure derived from that as a basis for calculating risk of data loss. I probably will run it as-is and do some benchmarks on the pool before I move data over, if it's not adequate I can upgrade the CPUs at that point.

Ok thanks, I'll do some more research. I'm still a few months away from building this (hooray for unemployment!).
Again, a conservative estimate for how long a rebuild takes would use the lowest speed (i.e. completely randomized I/O, which is in the 8-10MBps range). If you estimate using purely sequential I/O (~160MBps on modern spinning rust), you get more of a best-case figure.
It might make sense to do both a worst-case and a best-case estimate, since there's more than an order of magnitude between them.
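To put numbers on that spread, a back-of-the-envelope sketch (`rebuild_hours` is a made-up helper, and the 10TB drive size is just an example):

```python
def rebuild_hours(disk_tb: float, mb_per_s: float) -> float:
    """Hours needed to write one disk's worth of data at a given speed."""
    return disk_tb * 1e6 / mb_per_s / 3600  # TB -> MB, then seconds -> hours

# Hypothetical 10TB disk, at the worst/best-case speeds from above:
for label, speed in [("worst case, random I/O", 10), ("best case, sequential", 160)]:
    print(f"{label}: {rebuild_hours(10, speed):.1f} hours")
```

That comes out to roughly 277.8 hours (about eleven and a half days) versus 17.4 hours for the same disk, which is why quoting a single rebuild-speed number can be so misleading.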

I'd recommend running diskinfo -cit <device> on the disks before creating the pool and saving the numbers (preferably more than once, since there can be some variance). That way you have a baseline for if you ever need to test whether a disk is failing.
You probably also wanna do some burn-in testing if the disks are new.
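If you do save those diskinfo numbers, a trivial comparison against the baseline might look something like this (`looks_degraded` and the 70% threshold are hypothetical, not from any existing tool):

```python
from statistics import median

def looks_degraded(baseline_mbps, current_mbps, tolerance=0.7):
    """True if the current median transfer rate has dropped below
    tolerance * the median of the saved baseline runs."""
    return median(current_mbps) < tolerance * median(baseline_mbps)

# Three baseline runs saved at pool-creation time vs. a slow run today:
print(looks_degraded([165, 162, 168], [95, 101, 98]))    # True
print(looks_degraded([165, 162, 168], [160, 158, 163]))  # False
```

Using the median over several runs is what soaks up the run-to-run variance mentioned above; a single reading on a busy system can easily look worse than it is.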

:(:hf::(

BlankSystemDaemon fucked around with this message at 16:36 on May 7, 2022

Medullah
Aug 14, 2003

FEAR MY SHARK ROCKET IT REALLY SUCKS AND BLOWS
Been a while since I played with a NAS. I currently have an FTP server, Plex server, and qBittorrent running on my main PC, which is fine for the most part, but in summer my office gets hot as hell, which sucks when I work from home.

Been thinking of getting something different that can do all that and attach my storage to it, and set it up in the basement so I don't leave my PC running at all times.

Any recommendations?

BlankSystemDaemon
Mar 13, 2009



Medullah posted:

Been a while since I played with a NAS. I currently have an FTP server, Plex server, and qBittorrent running on my main PC, which is fine for the most part, but in summer my office gets hot as hell, which sucks when I work from home.

Been thinking of getting something different that can do all that and attach my storage to it, and set it up in the basement so I don't leave my PC running at all times.

Any recommendations?
We're gonna need a bit more knowledge about what you're in the market for.

Are you looking for something plug-and-play that'll "just work", but won't give you the best experience?
Are you looking to build your own?
Does data integrity and availability matter to you? I.e. do you want something where you'll have no questions about whether there's data corruption, and how important is it that the data doesn't disappear if you lose a disk?
And perhaps most important: How're you going to be backing this up? I'm assuming you've got an existing backup strategy, which this needs to be able to integrate into.

Medullah
Aug 14, 2003

FEAR MY SHARK ROCKET IT REALLY SUCKS AND BLOWS

BlankSystemDaemon posted:

We're gonna need a bit more knowledge about what you're in the market for.

Are you looking for something plug-and-play that'll "just work", but won't give you the best experience?
Are you looking to build your own?
Does data integrity and availability matter to you? I.e. do you want something where you'll have no questions about whether there's data corruption, and how important is it that the data doesn't disappear if you lose a disk?
And perhaps most important: How're you going to be backing this up? I'm assuming you've got an existing backup strategy, which this needs to be able to integrate into.

I'm not afraid to do a little work to configure, but am open to options as I'm not really looking to do anything too crazy or out of the box with it.

Right now I have all my data and media on separate drives with FreeFileSync mirroring everything to two equal sized external hard drives, and my actual non media data is on OneDrive and Carbonite. I lost a ton of pictures years ago and got kind of paranoid about losing data again.

Edit - To clarify, this will be for media mostly. My main data will stay on the main PC I think.

BlankSystemDaemon
Mar 13, 2009



Medullah posted:

I'm not afraid to do a little work to configure, but am open to options as I'm not really looking to do anything too crazy or out of the box with it.

Right now I have all my data and media on separate drives with FreeFileSync mirroring everything to two equal sized external hard drives, and my actual non media data is on OneDrive and Carbonite. I lost a ton of pictures years ago and got kind of paranoid about losing data again.

Edit - To clarify, this will be for media mostly. My main data will stay on the main PC I think.
There's either a run-of-the-mill Synology or QNAP, which are about as basic as you can get and still do reasonably well at being a NAS.
They both do varying levels of RAID so they do have some level of data availability, but it should also be said that they're using BTRFS, which unfortunately has quite the history (the RAID5/6 implementation was found to be corrupting data, and as far as I can find out/remember it's never truly been fixed), so they might require a bit of care and feeding in that respect.

Then there's the QNAP TS-h973AX, which is like the other ready-made boxes from Synology and QNAP but uses ZFS, which is bar none the best filesystem ever invented when it comes to data integrity - to the point that unless you're trying to gently caress it up, it's very very hard to lose data without suffering catastrophic hardware loss at the same time.

Finally, there's almost endless variability in the DIY market, where you can go from cases that're almost as small as the second QNAP system I mentioned but fit 4 to 8 drives, all the way up to a rack full of disks.

The single most important factor for whether you're paranoid enough about your data is your backup strategy.
You should have three copies of your data; one on-site and on-line, one on-site and off-line, and one off-site and on-line or off-line (depending on use-case).
Next step is making sure your backup frequency and testability are up to par; i.e. how much data will you lose if you suffer catastrophic data loss, and are you checking that you can restore from your backups?

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Medullah posted:

Been a while since I played with a NAS. I currently have a FTP server, Plex server and Qbittorent running on my main PC which is fine for the most part, but in summer my office gets hot as hell, which sucks when I work from home.

Been thinking of getting something different that can do all that and attach my storage to it, and set it up in the basement so I don't leave my PC running at all times.

Any recommendations?
I don't have any experience with NAS appliances, so all of this is about building your own:

NAS and torrents are both pretty lightweight in terms of CPU load. I run both on a quad-core 2010 Xeon (L3426) in a Supermicro X8SIL-F and it basically just idles all day with a 6x10TB RAID-Z2. Power consumption is mostly from the hard drives, with the rest of the system drawing maybe 20W at idle. It has ECC as a bonus - I don't know if that has ever made any difference for me in practice, but it's a nice thought and would be expensive to do in a brand new machine. This has been solid for almost 4 years, first on CentOS 7 and then on Rocky Linux.

I originally ran Plex off the same box but CPU decoding is pretty rough on such an old chip so about 5 months ago I ended up moving to a $110 HP S01 from eBay with an i5-10400 and 16GB of memory swapped in. Plex says it's using QuickSync and from CPU load I'd estimate it can handle at least 6-7 FHD transcodes at once, vs. ~1 when I was running Plex on the NAS. Idle power consumption on this is almost nothing, around 7-10W.

If I were to do both in one machine today, I'd personally be inclined to build something on Alder Lake since its performance is so good and pricing is fairly aggressive. Avoid the -F models because those lack the IGP and therefore QuickSync. It's nice if you can get enough native SATA ports for the drives you want, but if not 8-port SAS adapters are readily available and that's what I use. The biggest challenge might honestly be finding an appropriate case for a reasonable price depending on how many 3.5" drive bays you want.

Eletriarnation fucked around with this message at 12:40 on May 14, 2022

BlankSystemDaemon
Mar 13, 2009



For the record, I fully support going the DIY route.

It's much more fun, and there's severe risk of learning something.
Plus, you might end up being the #1 poster ITT, like I did.

Medullah
Aug 14, 2003

FEAR MY SHARK ROCKET IT REALLY SUCKS AND BLOWS

BlankSystemDaemon posted:

For the record, I fully support going the DIY route.

It's much more fun, and there's severe risk of learning something.
Plus, you might end up being the #1 poster ITT, like I did.

I do have a couple Raspberry Pi 4s laying around, I suspect I'd still need my PC for heavy Plex transcoding but I bet I could set up the file server pretty nicely with that

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
On serverbuilds there's a fresh link to a Chenbro NAS chassis that could be the start of a new build.

https://forums.serverbuilds.net/t/chenbro-sr301-4-bay-mini-itx-nas-seller-accepts-90-offers-free-shipping/12084

I'd jump on it but I just built a TrueNAS box from one of those cheap HP desktops with an 8-bay DAS for storage.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Medullah posted:

I do have a couple Raspberry Pi 4s laying around, I suspect I'd still need my PC for heavy Plex transcoding but I bet I could set up the file server pretty nicely with that

The Pi 4's CPU should be more than adequate, but are you already using USB drives? I'm not sure if you'd notice a performance difference vs. SAS/SATA but if nothing else I'd feel better about a setup that has one big power supply for fewer points of failure.

Medullah
Aug 14, 2003

FEAR MY SHARK ROCKET IT REALLY SUCKS AND BLOWS

Eletriarnation posted:

The Pi 4's CPU should be more than adequate, but are you already using USB drives? I'm not sure if you'd notice a performance difference vs. SAS/SATA but if nothing else I'd feel better about a setup that has one big power supply for fewer points of failure.

Yeah good point, the current setup is two internal drives that I use primarily with two USB drives mirroring them.

susan b buffering
Nov 14, 2016

One of the drives in my DS218 died last week. Is there any harm in getting a larger drive to replace the lost one with an eye towards replacing the other one with the same size in a few months?

e: It's a mirrored drive pool or w/e it's called.

Klyith
Aug 3, 2007

GBS Pledge Week

susan b buffering posted:

One of the drives in my DS218 died last week. Is there any harm in getting a larger drive to replace the lost one with an eye towards replacing the other one with the same size in a few months?

e: It's a mirrored drive pool or w/e it's called.

Nope! https://kb.synology.com/en-uk/DSM/help/DSM/StorageManager/storage_pool_expand_replace_disk

susan b buffering
Nov 14, 2016


Thank you!

Generic Monk
Oct 31, 2011

absolutely gently caress western digital so hard. i bought (CMR, 6TB) red drives a couple of years ago, truenas advised that one of them was throwing smart errors so I returned it under warranty, it took them months to ship a replacement but it finally arrived last week. started the resilver and everything was going ok until, not long after, the speed dropped to about 20MB/s and kept dropping from there. turns out they shipped an SMR drive as the replacement. gently caress offffffff

i've raised a support ticket saying that this isn't an equivalent product and to issue one that is; what are the odds that goes anywhere? i realised I was getting ripped off buying reds shortly after I bought these anyway so I won't be buying any more regardless. what's the best bang for the buck I can get in the UK in terms of drive shucking? i really don't want to spend a ton of money but if the performance is this compromised with the replacement drive then i'll happily spend a little money to sort it

Wibla
Feb 16, 2011

Last I checked in Norway, it was cheaper (per TB) to just buy WD HC550 18TB drives than buying some external drive. YMMV though.

How many drives do you have in that array?

Generic Monk
Oct 31, 2011

Wibla posted:

Last I checked in Norway, it was cheaper (per TB) to just buy WD HC550 18TB drives than buying some external drive. YMMV though.

How many drives do you have in that array?

I think the maths is slightly more favourable when you're looking at 12tb and possibly below; amazon has a 12tb elements external for 180GBP which I... think is good? I've been out of the game for a while on hard drives so I have no idea on pricing really. I really don't want to spend that much though. I guess I could look for replacement 6TB disks on ebay lmao.

This is 3 disks in a RAIDZ1 (knowing what I know now I would probably do 2 mirrored vdevs of 2 disks each, but honestly I set this up in like 2015 and it's been mostly rock-solid since then and, beyond doubling the capacity of the disks since then, I haven't been arsed to touch it beyond basic maintenance :effort: )

I assume once the SMR disk finishes resilvering and comes online it's just going to tank the performance of the whole pool? Given the resilver has taken over a week I can only assume it will. Guess I'll see what WD come back with before doing anything.

Thanks Ants
May 21, 2004

#essereFerrari


I would have thought WD will swap it for a non-SMR disk, you bought a Red for a NAS so they need to give you something that will work in a NAS.

Klyith
Aug 3, 2007

GBS Pledge Week
If you bought it in like 2019 or earlier as an off the shelf Red NAS drive you have a really good case. Not only should you write back to WD to complain, but also CC some tech sites into the email (servethehome and others that were prominent in reporting the Red SMR shitshow in 2020).

If you got the drive in the 2nd half of 2020, or if this was a shucked drive because you were reading internet for posts like "buy this model of external, it has a 40EFAZ inside that's a whitelabel Red NAS", you don't have a case and will have to suck it up.


Thanks Ants posted:

you bought a Red for a NAS so they need to give you something that will work in a NAS.

WD thinks SMR drives work in a NAS!


(TBF a SMR drive is ok in some NAS applications: in a consumer 2-drive box it's no worse than SMR is normally. It's death for ZFS* though.)

*unless the drive is doing enterprise host-managed SMR or whatever

Generic Monk
Oct 31, 2011

Thanks Ants posted:

I would have thought WD will swap it for a non-SMR disk, you bought a Red for a NAS so they need to give you something that will work in a NAS.

Hopefully! I did send them the amazon invoice as proof of purchase months ago, and the amazon page for the 6TB disk now refers to the SMR version (I'm not sure if they actually produce 6TB reds that aren't SMR anymore?). But still, the model number is clearly printed on the drive label and the performance characteristics of these things are well understood. Maybe a weird oversight, maybe a case where they've looked at the numbers and statistically most people won't notice?

Honestly this is a product where the audience is exclusively the nerdiest of computer nerds; it's bizarre to me that the whole lower end of that lineup is kind of hosed for its intended use case. Just rent-seeking all the way down I suppose.

Is there a mom-and-pop, farmer's market hard drive company that I've somehow missed? HGST is now just western digital and I've avoided Seagate ever since I owned their 1.5TB drives that randomly dropped out of RAID for no reason (granted my being an idiot running them in RAID0 off the motherboard RAID controller was kind of asking for it)

Generic Monk fucked around with this message at 19:08 on May 17, 2022

Thanks Ants
May 21, 2004

#essereFerrari


I would assume that spinning discs are going to become something that ends up being OEM-only products or sold direct to cloud providers. Or sold with some sort of rebranding agreement to become like the Synology NAS drives.

Motronic
Nov 6, 2009

Generic Monk posted:

(I'm not sure if they actually produce 6TB reds that aren't SMR anymore?)

Red is (5400? Maybe the weird 5200/5300 still) SMR, Red Plus is 5400 CMR, Red Pro is 7200 CMR.

Generic Monk
Oct 31, 2011

Klyith posted:

If you bought it in like 2019 or earlier as an off the shelf Red NAS drive you have a really good case. Not only should you write back to WD to complain, but also CC some tech sites into the email (servethehome and others that were prominent in reporting the Red SMR shitshow in 2020).

If you got the drive in the 2nd half of 2020, or if this was a shucked drive because you were reading internet for posts like "buy this model of external, it has a 40EFAZ inside that's a whitelabel Red NAS", you don't have a case and will have to suck it up.

I think jan 2020, and yeh it was a legit red drive from amazon. i bought it for the warranty! i think the manufacture date was sometime in 2017 though, so presumably they were selling through their remaining inventory of the good ones before switching over.

BlankSystemDaemon
Mar 13, 2009



Generic Monk posted:

I assume once the SMR disk finishes resilvering and comes online it's just going to tank the performance of the whole pool? Given the resilver has taken over a week I can only assume it will. Guess I'll see what WD come back with before doing anything.
That's a big assumption; nothing I've seen leads me to believe that it'll ever finish.

Thanks Ants posted:

I would have thought WD will swap it for a non-SMR disk, you bought a Red for a NAS so they need to give you something that will work in a NAS.
They're gonna cheap out to the best of their ability, especially if it's been open for months - but in theory (at least for EMEA regions) they're not allowed to replace a product with something that's worse, only equivalent or better.
There's no statute of limitations on RMA cases expiring, so it can be prudent to take a leaf out of datacenter procurement policies and always have at least one good drive lying ready for replacement.

Thanks Ants posted:

I would assume that spinning discs are going to become something that ends up being OEM-only products or sold direct to cloud providers. Or sold with some sort of rebranding agreement to become like the Synology NAS drives.
We've been heading that way for years, with maybe the majority of people and companies either getting non-volatile flash storage and/or moving all their bulk storage on the butt.

Generic Monk
Oct 31, 2011

BlankSystemDaemon posted:

That's a big assumption; nothing I've seen leads me to believe that it'll ever finish.

yeah i pulled it, it got to 95%ish by which point it was going at about 4MB/s with no estimated completion time. gently caress that

on the plus side WD did email me today saying they would replace it with a CMR drive, so yay! i just hope it doesn’t take actual months like this one did. the support ticket got escalated pretty much immediately which i guess is the result of all the media attention this got in 2020

i only discovered and emailed them about the issue yesterday so can’t fault the turnaround in that regard. it was the actual shipping of the replacement that took months; i think i shipped the original drive to them in early march? i can only assume that’s covid fallout or cost-cutting since it didn’t take nearly that long when i did that a few years ago.

Generic Monk fucked around with this message at 14:37 on May 18, 2022

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

There's no statute of limitations on RMA cases expiring, so it can be prudent to take a leaf out of datacenter procurement policies and always have at least one good drive lying ready for replacement.

I'm starting to lean this way myself, especially if a company is going to gently caress around with what you get on the warranty anyway. I do have the luxury of an assload of SAS drive bays so I'm leaning more towards the constant supply of $100 10TB SAS drives on eBay and just buying N+1 or N+2 with the savings over buying an EasyStore I'm just going to shuck.

Wibla
Feb 16, 2011

My new (to me) fileserver/vm host turned up today!






Going to do some transcode testing with plex in a test VM using the Quadro P400 and test some other stuff before I actually move the drives + HBA(s) over, but it looks promising for now.

Motronic
Nov 6, 2009

Wibla posted:

Going to do some transcode testing with plex in a test VM using the Quadro P400 and test some other stuff before I actually move the drives + HBA(s) over, but it looks promising for now.

What is it that you need for Plex transcoding on a GPU to work? I thought I looked into this before and my 2670s did not support it. I'd love to be wrong. (yours are marked at "v3" and mine are "0", so that might be the difference)

Klyith
Aug 3, 2007

GBS Pledge Week

Motronic posted:

What is it that you need for Plex transcoding on a GPU to work? I thought I looked into this before and my 2670s did not support it. I'd love to be wrong. (yours are marked at "v3" and mine are "0", so that might be the difference)

Do you *have* a GPU? The xeon 2670 has no GPU, so is irrelevant to hardware transcoding.

Motronic
Nov 6, 2009

Klyith posted:

Do you *have* a GPU? The xeon 2670 has no GPU, so is irrelevant to hardware transcoding.

There are no GPUs, I thought I remembered that there had to be some sort of CPU support for using a separate GPU in a card slot that these didn't have. Maybe I'm totally misremembering this and I was just plain wrong and I should go ebay a card right now.

Mr. Crow
May 22, 2008

Snap City mayor for life
You can definitely use a gpu to transcode if you're paying for it in plex (:lol:)

https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/

Alternatively jellyfin is completely free and open source and does it https://jellyfin.org/docs/general/administration/hardware-acceleration.html

Also it's faster and snappier with large libraries.

Neither will work in a container without jumping through hoops.

Motronic
Nov 6, 2009

Mr. Crow posted:

You can definitely use a gpu to transcode if you're paying for it in plex (:lol:)

https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/

Yeah, that's the page I remember. And it seems to indicate you need something with Intel QuickSync, which these procs didn't have the last time I checked on the Intel page they link to.

Edit:

Wibla posted:

You need quicksync (on an intel igpu) or a supported nVidia GPU.

OR is the part I was missing. Thank you.

Wibla
Feb 16, 2011

Motronic posted:

What is it that you need for Plex transcoding on a GPU to work? I thought I looked into this before and my 2670s did not support it. I'd love to be wrong. (yours are marked at "v3" and mine are "0", so that might be the difference)

You need quicksync (on an intel igpu) or a supported nVidia GPU.

Mr. Crow posted:

You can definitely use a gpu to transcode if you're paying for it in plex (:lol:)

https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/

Alternatively jellyfin is completely free and open source and does it https://jellyfin.org/docs/general/administration/hardware-acceleration.html

Also it's faster and snappier with large libraries.

Neither will work in a container without jumping through hoops.

I paid €74.99 for lifetime plex pass sometime in the dark ages, so that's covered :haw:

I'm not looking forward to messing with passthrough to make it work, but apparently it's not that painful with Quadro cards?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Anyone here happens to run a X570D4U with a Ryzen 5xxx, ideally the G one? What's your idle power draw?

Teabag Dome Scandal
Mar 19, 2002


I'm trying to install/reinstall Unifi Controller on my Unraid box after an update deleted the docker container. Now, whenever I try to install Unifi I get a port 1900 bind error. It looks like that's a DLNA port, but other docker containers don't have a problem binding to it. Can I just change it to 1901 without breaking anything the controller relies on? Why is it bitching about it when Plex and Emby don't seem to care?

edit: i went ahead and changed it and got this error. 1901 is not currently in use by any other docker that I can see

quote:

docker: Error response from daemon: driver failed programming external connectivity on endpoint unifi-controller (4572ebe3d85217afd8422b6dcb13ec3ce22739cd4555f94eb4374203ac3fb37b): Error starting userland proxy: listen udp4 0.0.0.0:1901: bind: address already in use.

final edit: i just removed the port from the config and that seems to have made everyone happy. we'll see if that fucks anything up?

Teabag Dome Scandal fucked around with this message at 23:53 on May 18, 2022


e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
unraid 6.10 is out and I’m resisting the urge to upgrade while I’m on the road because who knows what it’s going to break
