Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BlankSystemDaemon posted:

EDIT: Just saw your edit; remember that ZFS records are variable-sized, so a record doesn't have to be 1MB just because that's what the dataset is configured for. It depends on the dirty write buffer, whether there's synchronous I/O involved, and a bunch of other factors.
Wait what? I thought it always adheres to record size except tail packing the end to the smallest record size possible? If it chops up records anyhow, what's the alphabet salad I need to feed to zdb as parameters to get a record size histogram of a single file?


BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Wait what? I thought it always adheres to record size except tail packing the end to the smallest record size possible? If it chops up records anyhow, what's the alphabet salad I need to feed to zdb as parameters to get a record size histogram of a single file?
I think you misunderstood something, probably because I phrased it poorly - so just to make it clear:
A record that's recordsize bytes big doesn't have to be written to disk as a contiguous set of 512B or 4kB sectors in order to appear as a single record to ZFS.
A record also doesn't have to be recordsize bytes big in order to be written to disk; for example, the dirty-data timer may reach 5 seconds (the default) before the dirty write buffer contains enough data to fill recordsize bytes.

It's also worth noting here that draid is fixed-width, so none of this really applies there; I don't yet know how draid works in detail, as I haven't looked into it enough.
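As for the zdb alphabet salad: here's a sketch of one way to see the actual record sizes of a single file (the pool, dataset, and file names are made up, and zdb's output format varies a bit between OpenZFS versions):

```shell
# 1) Find the file's object number; on OpenZFS it matches the inode
#    number reported by ls -i / stat:
ls -i /tank/media/bigfile.mkv

# 2) Dump that object's block tree from the dataset. Each L0 line is
#    one record, and the size field (e.g. 20000L/20000P) shows its
#    logical/physical size:
zdb -ddddd tank/media <object-number>
```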

disaster pastor
May 1, 2007


I have an unraid server that has been running terrifically for two years, but was running out of drive capacity. Got a new drive today, it's in and preclearing. Is there any value to trying to move things off other drives (the most packed drive is at 92% capacity), will unraid do that itself, or is it just something to not care about?

CopperHound
Feb 14, 2012

disaster pastor posted:

I have an unraid server that has been running terrifically for two years, but was running out of drive capacity. Got a new drive today, it's in and preclearing. Is there any value to trying to move things off other drives (the most packed drive is at 92% capacity), will unraid do that itself, or is it just something to not care about?
It doesn't move the stuff itself, but depending on your settings you might be fine leaving it.
If you have some drive or subfolder affinity set you should probably clear some space. For example, I had mine set up to keep each season of shows on the same drive so partial data loss would be easier to recover from.

kri kri
Jul 18, 2007

disaster pastor posted:

I have an unraid server that has been running terrifically for two years, but was running out of drive capacity. Got a new drive today, it's in and preclearing. Is there any value to trying to move things off other drives (the most packed drive is at 92% capacity), will unraid do that itself, or is it just something to not care about?

Not something to care about. Install the new drive, add it to the array, and rebuild parity.

disaster pastor
May 1, 2007


Thank you both! It's up and running now, I went through my shares and checked the split levels/made sure they were set to use all drives. If the 92% keeps going up I'll look into moving some stuff, otherwise I'll leave things as they are.

Duck and Cover
Apr 6, 2007

https://eshop.macsales.com/item/Western%20Digital/0F38459/?utm_source=affiliate&utm_campaign=cj&cjevent=722bcd61674811ec822103350a82b824

Why are these so cheap?

redeyes
Sep 14, 2002

by Fluffdaddy

Not super cheap, but I'd be scared of no warranty.

Duck and Cover
Apr 6, 2007

redeyes posted:

Not super cheap, but I'd be scared of no warranty.

It's cheap when you consider it isn't a WD 18TB external. I know the externals have been at $300, just yesterday in fact. From what I can tell, Mac Sales/OWC claims a limited 5-year warranty, but I also don't see them or OWC on Western Digital's site, so :shrug:

Duck and Cover fucked around with this message at 20:49 on Dec 27, 2021

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

redeyes posted:

Not super cheap, but I'd be scared of no warranty.

This was my thought as well. My guess is that they're old but (hopefully) unused drives; the sale price is pretty consistent with current 18TB drives, so that crossed-out number is probably MSRP from when they were new, which was probably 5+ years ago. Looking at reviews for these on other sites, it seems like you can actually get WD to register the warranty on them, but you have to contact support to do it and they'll hassle you about it; if you just try to register the S/N through their site, it'll deny you.

It's pretty common to find old enterprise drives for good prices that say they're new, and even look like it out of the box, but then you run a diag and it pulls tens of thousands of hours of run time. It's common enough that most of the resellers have a standard procedure of "if the customer is smart enough to find out it's not new, immediately send them an actual new one; we'll still make a good profit from all the ones that don't figure it out".

Basically, it's a roll of the dice whether you'll a) get an actually unused drive, or b) be able to get the warranty on it.

BlackMK4
Aug 23, 2006

wat.
Megamarm
Seeing as how X470D4U boards are unobtainium right now, does anyone have recommendations on an mATX board/CPU combo that could be had for sub-$600 USD with out-of-band management and a reasonably recent chipset?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Oooh, the X470D4U seems like a nice board. Noted.

This however is a sketchy quote: ECC support is only with Processors with PRO technologies.

Sir Bobert Fishbone
Jan 16, 2006

Beebort

Combat Pretzel posted:

Oooh, the X470D4U seems like a nice board. Noted.

This however is a sketchy quote: ECC support is only with Processors with PRO technologies.

Guess who went through 3 separate orders of ECC RAM over several months before reading that? :smith:

BlackMK4
Aug 23, 2006

wat.
Megamarm
Oh, well, just realized that there's an X570D4U and it's on Amazon/Newegg right now.

Actuarial Fables
Jul 29, 2014

Taco Defender

Combat Pretzel posted:

Oooh, the X470D4U seems like a nice board. Noted.

This however is a sketchy quote: ECC support is only with Processors with PRO technologies.

The full quote is

quote:

*For AMD Ryzen Desktop Processors with Radeon Graphics, ECC support is only with Processors with PRO technologies.

That's referring to the APUs, not the standard Ryzen processors.

e. I have a x470d4u with a 2700x and multi-bit ECC is enabled.

e2. Whoa, what happened to the x470d4u supply? I picked one up earlier in the year when there were a bunch from $150-250 on eBay, but now it's just a few for $600+ shipped international.

Actuarial Fables fucked around with this message at 00:01 on Dec 28, 2021

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Over here in Europe there are quite a few immediately available for 250€. Hope they'll still be available come Fall at reasonable prices.

BlackMK4
Aug 23, 2006

wat.
Megamarm

Actuarial Fables posted:

e2. Whoa, what happened to the x470d4u supply? I picked one up earlier in the year when there were a bunch from $150-250 on eBay, but now it's just a few for $600+ shipped international.

Haha, yeah, that is what I was seeing when I was originally looking a few months ago. Finally got around to starting on this project and it was a straight :wtf: at current prices.

Crunchy Black
Oct 24, 2017

by Athanatos
I've got another weird permissions issue with FreeNAS that I think has to do with the update to 12. This happens when I try to install any plugin.


Googling seems to indicate it's a problem with permissions in the jail, but trying to remove "iocage" remotely results in a locked-file error even though there are no jails running. Do I just need to remove it in the shell via the interface and let it recreate?

CoasterMaster
Aug 13, 2003

The Emperor of the Rides


Nap Ghost
Right now, I've got an old NUC running Ubuntu Server and a Drobo 5N for storage. The NUC is running a bunch of services (Plex, NextCloud, ZNC, etc) using a giant docker compose stack and the Drobo is mounted via Samba. There are two samba shares: one for media that's guest-accessible and one that's private for my NextCloud files.

I want to replace it with a custom build, but I'm not sure what direction I want to take for software. I'm reasonably sure I want to set up a ZFS pool, but beyond that I'm not sure. I plan on having an SSD for the OS drive and then a bunch of platter drives for the pool. The easiest thing would be to just install Ubuntu Server again and basically do the same thing I've been doing, but with local storage instead of over a network.

I also stumbled upon Proxmox and that seems pretty interesting. I know I can set up a ZFS pool there and virtualize a couple of Ubuntu boxes. Is this worth it? I guess I can see the benefit for having things like NextCloud on its own instance and Plex on its own (with iGPU passthrough?), NZBGet and friends on one and ZNC + the lounge on another...but this also feels like overkill. Neat, but overkill.

For Proxmox, it seems like the way to share a ZFS pool to VMs is something like installing Alpine Linux in an LXC container and then mounting the pool in the VMs? I stumbled upon https://www.reddit.com/r/Proxmox/comments/mco03f/smb_cifs_share_provided_by_proxmox/ and it seems like there are a lot of ways to go.
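From what I can tell, the usual shape of that LXC approach is a small container with the pool's datasets bind-mounted in, running Samba; something like this minimal share definition (the share name and path are invented for illustration):

```
[media]
    path = /mnt/tank/media
    browseable = yes
    read only = no
    guest ok = yes
```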

Is there some advantage to Proxmox I'm missing? It seems like for my case, it doesn't really give an advantage, but maybe I'm missing something.

CopperHound
Feb 14, 2012

Is there a certain reason you are thinking about moving to vms from containers? It seems like a lot of overhead.

CoasterMaster
Aug 13, 2003

The Emperor of the Rides


Nap Ghost

CopperHound posted:

Is there a certain reason you are thinking about moving to vms from containers? It seems like a lot of overhead.

Not particularly. I was just doing some reading in preparation for a new build and a lot of people were raving about Proxmox (even if all you do is virtualize Ubuntu with a bunch of containers ¯\_(ツ)_/¯ ), so I figured I’d give it a look. But as you said, seems like a lot of overhead for not much gain (at least in my case)

IOwnCalculus
Apr 2, 2003





CopperHound posted:

Is there a certain reason you are thinking about moving to vms from containers? It seems like a lot of overhead.

It's work, yes, but it's worth it. You get the benefit of completely segregating your apps from one another so that one install going fucky for some reason doesn't nuke anything other than itself, without the massive increase in RAM needed to run an environment like that with one VM per app instead of a container per app.

Also, if you put the work in up front and set it up using docker-compose or something similar, you can update the whole stack with just two commands.
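For anyone following along, the two commands for a compose-managed stack are typically along these lines (run from the directory holding your docker-compose.yml):

```shell
# Pull newer images for every service defined in the compose file:
docker-compose pull

# Recreate only the containers whose image or config changed:
docker-compose up -d
```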

History Comes Inside!
Nov 20, 2004




I’m tired of upgrading my single drive NAS solution every few years and I want to buy something with a few spare bays (I think 4 bays would be ideal) I can just slap extra/bigger drives into, while presenting as a single storage volume to keep things as simple as possible for managing the space.

I also don’t give a poo poo about redundancy because nothing I have is so important that I couldn’t bear to lose it, but I would still like it if one dead drive only lost me whatever was on that disc rather than take out my entire collection.

Is this a setup I can achieve in something from Synology or QNAP? I’d rather have a little off the shelf box than have to build a small PC to run something else if I can help it.

For some less-relevant info: processing power is mostly irrelevant because I'm not gonna be transcoding anything; it literally just needs to be a box to store files on that can be accessed over my local network. The heaviest lifting it might do is if I try to get fancy and move over my *arr and nzbget setups to have everything on one local device, but that's not essential and would just be a nice-to-have if I can be bothered to do the necessary computer touching.

CopperHound
Feb 14, 2012

History Comes Inside! posted:

I also don’t give a poo poo about redundancy because nothing I have is so important that I couldn’t bear to lose it, but I would still like it if one dead drive only lost me whatever was on that disc rather than take out my entire collection.

Is this a setup I can achieve in something from Synology or QNAP? I’d rather have a little off the shelf box than have to build a small PC to run something else if I can help it.
Sounds like you want unRAID. Can it just be installed in a qnap system?

BlankSystemDaemon
Mar 13, 2009



IOwnCalculus posted:

It's work, yes, but it's worth it. You get the benefit of completely segregating your apps from one another so that one install going fucky for some reason doesn't nuke anything other than itself, without the massive increase in RAM needed to run an environment like that with one VM per app instead of a container per app.

Also if you put the work in up front and set it up using docker-compose or something similar, you can update the whole stack with just two commands.
Hypervisors, especially ones that aren't legacy-free the way bhyve is, have had a lot more escapes than FreeBSD jails.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

IOwnCalculus posted:


Also if you put the work in up front and set it up using docker-compose or something similar, you can update the whole stack with just two commands.

I have two shell scripts I use to do this with jails; it's a lot nicer than running it all manually, for sure.
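Mine are roughly in this spirit; a from-memory sketch, so check jls(8) and pkg(8) on your system before trusting the exact flags:

```shell
#!/bin/sh
# Update packages inside every running jail, one at a time.
# `jls name` prints just the names of running jails.
for jail in $(jls name); do
    echo "Updating ${jail}..."
    pkg -j "${jail}" update -q
    pkg -j "${jail}" upgrade -y
done
```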

kri kri
Jul 18, 2007

disaster pastor posted:

Thank you both! It's up and running now, I went through my shares and checked the split levels/made sure they were set to use all drives. If the 92% keeps going up I'll look into moving some stuff, otherwise I'll leave things as they are.

I was tempted to stress about fill levels in the same way when I moved to unRAID. Honestly, just ignore it; unRAID is there to take care of everything. I haven't worried or cared about how full my drives have gotten since then.

History Comes Inside!
Nov 20, 2004




CopperHound posted:

Sounds like you want unRAID. Can it just be installed in a qnap system?

I have no idea, but if it can then I guess that’s a third option?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

CopperHound posted:

Sounds like you want unRAID. Can it just be installed in a qnap system?

Nope

YerDa Zabam
Aug 13, 2016



History Comes Inside! posted:



Is this a setup I can achieve in something from Synology or QNAP? I’d rather have a little off the shelf box than have to build a small PC to run something else if I can help it.


Sounds like either of those would suit you best. They're as 'turnkey' as it comes: just get one with 4 bays, fill 'em up, and forget it. If/when you do want to set up the apps and stuff, they're more than capable.
Definitely the least amount of computer touching of the options.

YerDa Zabam fucked around with this message at 15:29 on Dec 28, 2021

Chris Knight
Jun 5, 2002

me @ ur posts


Fun Shoe

History Comes Inside! posted:

I’m tired of upgrading my single drive NAS solution every few years and I want to buy something with a few spare bays (I think 4 bays would be ideal) I can just slap extra/bigger drives into, while presenting as a single storage volume to keep things as simple as possible for managing the space.

I also don’t give a poo poo about redundancy because nothing I have is so important that I couldn’t bear to lose it, but I would still like it if one dead drive only lost me whatever was on that disc rather than take out my entire collection.

Is this a setup I can achieve in something from Synology or QNAP? I’d rather have a little off the shelf box than have to build a small PC to run something else if I can help it.
Yep. A Synology 4-bay loaded with 4 equal drives and their SHR RAID gives you 3 drives' worth of storage and 1 of parity. You can swap out 1 disk at a time without losing anything.

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

History Comes Inside! posted:

I’m tired of upgrading my single drive NAS solution every few years and I want to buy something with a few spare bays (I think 4 bays would be ideal) I can just slap extra/bigger drives into, while presenting as a single storage volume to keep things as simple as possible for managing the space.

I also don’t give a poo poo about redundancy because nothing I have is so important that I couldn’t bear to lose it, but I would still like it if one dead drive only lost me whatever was on that disc rather than take out my entire collection.

Is this a setup I can achieve in something from Synology or QNAP? I’d rather have a little off the shelf box than have to build a small PC to run something else if I can help it.


I don't think Synology or QNAP actually supports this option. If you don't care about redundancy, then the only options they provide are:

  • Individual volumes on each drive, which limits each volume to the size of its drive.
  • A RAID-0 array that presents a single volume the combined size of all of your disks, but losing one drive will result in all of the data in the entire volume being lost.

I think Unraid is the only option that will give you a similar turnkey software experience to Synology/QNAP and allow for the drive usage you want. However this will mean having to build your own box.

Edit: It looks like Unraid can be installed on a QNAP as long as you get one that uses an Intel processor so that would probably be your best bet.
https://forums.unraid.net/topic/98828-unraid-on-qnap-nas-my-experience-ts-853a/

Krailor fucked around with this message at 19:26 on Dec 29, 2021

AirRaid
Dec 21, 2004

Nose Manual + Super Sonic Spin Attack
Hey goons I am in need of a mass storage upgrade, since the 4TB external drive I have is getting full and is becoming questionably reliable.

I'd like to replace it with just a Big loving Internal Disk Drive(tm). An actual NAS is beyond my need or budget at the moment, the drive mostly just holds movies/shows and serves a local Plex server which a couple TVs in the house connect to. The system is always on, very rarely powered down.

When looking at large capacity (10TB or so seems to be about the price range I'm looking to spend - £250) drives, there's a lot of stuff advertised as "NAS" or "Enterprise" specifically. What is the actual difference with these, and would it be a Bad Idea(tm) to put one in my system for the use described above?

And I guess regardless of the answer to that question, any recommendations for an actual drive? Been ages since I bought a platter drive.

Enos Cabell
Nov 3, 2004


Best play would be to buy an external drive, and either use it as is or shuck the drive and stick that in your system.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If you go for NAS drives anyway, check the spec sheets and look at the noise and power consumption values. At some point going up the capacities, you'll see a drop in noise and power; that's when they're filled with helium. I'd suggest going with those.

I haven't bought a platter drive in ages either. I was actually fearing annoying spindle-motor whine, but the helium seems to keep it in check. Head seeking though, for the love of god... Given the high number of platters, the head assembly is proportionally heavy and therefore loud as gently caress. The helium supposedly dampens this noise too, so god forbid an air-filled one.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

AirRaid posted:

Hey goons I am in need of a mass storage upgrade, since the 4TB external drive I have is getting full and is becoming questionably reliable.

I'd like to replace it with just a Big loving Internal Disk Drive(tm). An actual NAS is beyond my need or budget at the moment, the drive mostly just holds movies/shows and serves a local Plex server which a couple TVs in the house connect to. The system is always on, very rarely powered down.

When looking at large capacity (10TB or so seems to be about the price range I'm looking to spend - £250) drives, there's a lot of stuff advertised as "NAS" or "Enterprise" specifically. What is the actual difference with these, and would it be a Bad Idea(tm) to put one in my system for the use described above?

And I guess regardless of the answer to that question, any recommendations for an actual drive? Been ages since I bought a platter drive.
Typically, NAS drives like the WD Red and Seagate IronWolf just mean that they're rated for constant/longer use than something like a WD Blue, and they usually include a longer warranty. They'll still work just fine in any machine. Previously we could also say that buying a NAS drive would guarantee a CMR drive rather than an SMR one (which, depending on use, can be super slow/bad as NAS drives), but both WD and Seagate have been caught branding SMR drives as NAS drives. IIRC, if you're looking for a huge fuckoff drive then you should be fine, as all the SMR drives were in the 2-6TB size range.

Enterprise drives are similar but also tend to be rated for even longer operation and also tend to lack features to reduce noise or improve cooling since the assumption is that they'll be going into a data center where those things aren't really a concern. Buying these online can be a bit of a crap shoot since there are a ton of resellers that advertise them as new but they're actually used drives that have been cycled out of data centers. This doesn't necessarily mean that they're bad, just that they've had a lot of hours of hard use already and that there's likely no warranty on them from the manufacturer.

Personally I use Seagate IronWolfs but I have a couple of Hitachi Constellation Enterprise drives still kicking around in my system that have been working just fine for years despite coming to me used. I bought those because they were half the cost of the NAS drives and I didn't have the budget for new at the time, plus I knew my server was going to live in the basement where drive noise and temps wouldn't be a problem.

Most data hoarders tend to buy external drives and "shuck" them (remove them from their enclosures) because they're frequently the same NAS drives you would normally buy from WD or Seagate but tend to be cheaper. I'd also say don't bother with 7200+ rpm drives, 5400 tends to be fine for NAS use and they're usually a lot quieter.

Scruff McGruff fucked around with this message at 20:20 on Dec 29, 2021

BlankSystemDaemon
Mar 13, 2009



Scruff McGruff posted:

Typically, NAS drives like the WD Red and Seagate IronWolf just mean that they're rated for constant/longer use than something like a WD Blue, and they usually include a longer warranty. They'll still work just fine in any machine. Previously we could also say that buying a NAS drive would guarantee a CMR drive rather than an SMR one (which, depending on use, can be super slow/bad as NAS drives), but both WD and Seagate have been caught branding SMR drives as NAS drives. IIRC, if you're looking for a huge fuckoff drive then you should be fine, as all the SMR drives were in the 2-6TB size range.

Enterprise drives are similar but also tend to be rated for even longer operation and also tend to lack features to reduce noise or improve cooling since the assumption is that they'll be going into a data center where those things aren't really a concern. Buying these online can be a bit of a crap shoot since there are a ton of resellers that advertise them as new but they're actually used drives that have been cycled out of data centers. This doesn't necessarily mean that they're bad, just that they've had a lot of hours of hard use already and that there's likely no warranty on them from the manufacturer.

Personally I use Seagate IronWolfs but I have a couple of Hitachi Constellation Enterprise drives still kicking around in my system that have been working just fine for years despite coming to me used. I bought those because they were half the cost of the NAS drives and I didn't have the budget for new at the time, plus I knew my server was going to live in the basement where drive noise and temps wouldn't be a problem.

Most data hoarders tend to buy external drives and "shuck" them (remove them from their enclosures) because they're frequently the same NAS drives you would normally buy from WD or Seagate but tend to be cheaper. I'd also say don't bother with 7200+ rpm drives, 5400 tends to be fine for NAS use and they're usually a lot quieter.
Nah, the biggest difference between Reds and other drives rated for NAS use is that the firmware ships with TLER enabled, which is necessary if you're using any kind of RAID array and want errors to propagate up through the storage stack with any meaningful relationship to the I/Os being performed.
Unfortunately for us, what "enabled" means has changed. It used to be adjustable: via wdtler.exe on Windows, or via camcontrol cmd or smartctl's scterc setting depending on whether you're dealing with SCSI or SATA disks (FreeBSD is the only place I've used the latter two). Nowadays it usually means the error recovery period is 7 seconds instead of 30, which is still good enough for most software RAID implementations (certainly good enough for ZFS; I don't truly know about the others, but I think I'd have heard if it wasn't). You only truly get the option of setting it to 0, for hardware RAID arrays, if you buy RAID-enabled drives (although why you'd use hardware RAID is beyond me).

The rating is largely meaningless, as it almost never bears any relationship to independently verifiable data, which is why any MTTDL calculator lets you set more realistic MTBF values as well as adjust the BER.
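For reference, the knob in question is SCT Error Recovery Control via smartctl; a quick sketch (the device node is an example, and not every drive exposes the setting):

```shell
# Read the current SCT ERC timers on a SATA drive
# (use the right device node: /dev/sdX on Linux, /dev/adaX on FreeBSD):
smartctl -l scterc /dev/sda

# Set read and write recovery timeouts to 7.0 seconds;
# the values are in units of 0.1 seconds:
smartctl -l scterc,70,70 /dev/sda
```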

AirRaid
Dec 21, 2004

Nose Manual + Super Sonic Spin Attack
So, to reiterate, I am not buying for a NAS; this is going to sit internally in my desktop system. I guess noise will be an issue, although my case is one designed to be on the quieter side.

I've had a look, and there's very little price difference between external drives and bare drives, and VERY little choice in them, so I'll probably just avoid that. Also, isn't there some uncertainty with shucking, where some of the enclosures/drives don't come with standard SATA connectors?

I've found a Toshiba MG08ACA14TE, which is 14TB, CMR, helium-filled, on the quieter end of the range according to the spec sheet, and within budget. Is there any problem anyone wants to bring up with Toshiba drives? The Seagate ones I've looked at all seem to be fairly loud, and WD seems to command a premium in price.

Also, all of these drives are 7200 RPM; the 5400 RPM ones seem to be more expensive.

mom and dad fight a lot
Sep 21, 2006

If you count them all, this sentence has exactly seventy-two characters.
I hope it's okay to ask this. I was redirected here from the PC Building Thread. Basically:

- I'm wanting to throw 2 HDDs in my desktop PC for long-term storage
- The HDDs would be in RAID 1 (well... mirrored volumes in Win10) for when one of them fails
- They wouldn't be used that often; surely read more often than written. Mainly just a place to dump photos/videos/documents/etc.
- I chose HDDs because they're, like, half the cost of SSDs

Problem is, WD's most bang-for-buck HDDs are shingled magnetic recording. Will it make a significant difference if I stick with SMR vice CMR for this use case? Am I gonna regret it?


BlankSystemDaemon
Mar 13, 2009



Drive-managed SMR works for traditional filesystems like NTFS, so you should be fine.
No guarantees, though.

Have you thought about shucking? You can often get drives for cheaper than the SMR drives.
