IOwnCalculus
Apr 2, 2003





Oh hey, I took a crash course on the 380G9 12/3-bay setup because I bought and populated one earlier this year. Step one: if you want one set up like this, try to find one with all the drive bays already installed - the 12-bay upgrade kit for a 4-bay 380G9 sometimes costs as much as or more than the server itself. Dunno about the three-bay kit since I didn't look too closely at those parts, but I got lucky and found a server that already had it and decided to take advantage.

There's one HBA built into the motherboard, and then I would expect most (if not all?) 12-bay configurations to also have some form of mezzanine-style HBA mounted dead center in the server - H220, P440, P840, etc. The HP HBAs are slightly odd in that they don't play completely nicely with smartctl: you have to pass '-d cciss,N', where N is a drive ID number. Fun fact: smartctl still makes you pass in /dev/sdX as well, but it doesn't actually check that "N" and "X" are the same device, so I can iterate through my drives with:

code:
smartctl -i -d cciss,0 /dev/sda
smartctl -i -d cciss,1 /dev/sda
...
smartctl -i -d cciss,13 /dev/sda
I have all but one drive populated out of the 12 in the front and 3 in the rear.
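To save some typing, the same checks work as a quick shell loop (just a sketch - adjust the upper bound to however many drive IDs your controller exposes):

code:
# probe each cciss drive ID behind the HP controller
for N in $(seq 0 13); do
  smartctl -i -d cciss,$N /dev/sda
done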

With all that said, at least the controller that came in mine - the P840ar - can work in a JBOD/HBA-style mode. I don't have arrays or anything set up on the drives; they just show up, and they were imported from an existing zpool on a prior server. You could plug the front drive enclosure into a regular LSI HBA, but if your server comes with a P840ar, you'll have some weird double-wide SAS connectors on the controller side, so you'd have to replace the SAS cables as well. And yes, the 12 front drive bays and the 3 rear drive bays have separate SAS cables. The rear is a single cable no matter what, so you'll connect all three rear drives to whichever controller you choose; the front, I think, can be set up with two cables instead of just one.

I also have an LSI 2308 connected via external SAS to my modified NetApp DS4243. Those drives still behave/respond the same as before.


BlankSystemDaemon
Mar 13, 2009



The Microsemi RAID controller that's part of HPE servers, at least on the ones I've had access to, does have an option to use ssacli to switch the controller into HBA mode:
ssacli cmd -q "controller slot=N modify hbamode=on forced"

This obviously assumes that you've got no existing RAID configuration on the drives, and after doing it you'll want to check with something like hd(1) whether the drives still remain completely empty. On most of the servers I've done it on, it worked fine - but at least one treated the drives as separate RAID0s.
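A minimal version of that check (assuming the drive shows up as /dev/sda; run as root):

code:
# dump the first 1 MiB - an empty disk prints a single row of zeroes,
# a '*' for the squeezed repeated lines, and the final offset
hd -n 1048576 /dev/sda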

As always, the reason ZFS wants the disks in IT mode on a RAID controller is that ZFS needs complete control over the disk cache - it relies on being able to flush to disk and trust that the data really has hit the disk when the disk says it has.
S.M.A.R.T. is just a nice bonus that can occasionally give you advance warning of failures (unless you have SAS drives, which rely on DEFECT lists instead) - and it's not a reliable way of telling whether the disk is truly in IT mode or not.
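For what it's worth, smartmontools does print the grown defect list for SAS drives in its standard output, so you can at least watch that (reusing the cciss addressing from the post above):

code:
# SCSI/SAS output includes a line like 'Elements in grown defect list: 0'
smartctl -a -d cciss,0 /dev/sda | grep -i defect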

foutre
Sep 4, 2011

:toot: RIP ZEEZ :toot:
I want to set up a NAS, but I'm trying to figure out what to get. I kind of want something where I can swap parts out and fiddle with it - although I don't have a ton of experience building PCs in general, so if the barrier to entry is high I'm up for Synology too.

Just looking through Craigslist, I've found two possible options. There's a small 8-bay case, the "U-NAS NSC-800 Server Chassis", for sale by itself for $100, or with a motherboard/PSU and 8GB of RAM for $400 (the cert's invalid on the U-NAS site so I avoided linking it, but the listing had a link to an AnandTech benchmark of a similar system vOv). Alternatively, there's an older Dell PowerEdge that comes with everything but drives for $240.

I like that both can accommodate a ton of drives, but I'm not sure about the pros and cons of one over the other. I do live in an apartment so smaller is generally better, but I wasn't sure if it'd be substantially more difficult building in one vs. the other. Would appreciate people's thoughts on whether either seems like a good option, or similar options to look out for! Feels like there's just a ton of choices out there.

E: looking through the link on the PowerEdge, it seems pretty complicated to set up - "no BIOS" and "never sold to consumers" is probably a little beyond my level

foutre fucked around with this message at 21:12 on Nov 4, 2022

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Honestly that U-NAS case looks pretty drat nice - FlexATX PSU, mini-ITX and 8 hotswap bays with 120mm fans? I'd totally get that. Motherboard, PSU, and 8GB of RAM for $300 more probably only makes sense if the motherboard is something special or there's also a processor included. What kind of total budget are you thinking about and do you care about ECC?

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

foutre posted:

I want to set up a NAS, but I'm trying to figure out what to get. I kind of want something where I can swap parts out and fiddle with it - although I don't have a ton of experience building PCs in general, so if the barrier to entry is high I'm up for Synology too.

Just looking through Craigslist, I've found two possible options. There's a small 8-bay case, the "U-NAS NSC-800 Server Chassis", for sale by itself for $100, or with a motherboard/PSU and 8GB of RAM for $400 (the cert's invalid on the U-NAS site so I avoided linking it, but the listing had a link to an AnandTech benchmark of a similar system vOv). Alternatively, there's an older Dell PowerEdge that comes with everything but drives for $240.

I like that both can accommodate a ton of drives, but I'm not sure about the pros and cons of one over the other. I do live in an apartment so smaller is generally better, but I wasn't sure if it'd be substantially more difficult building in one vs. the other. Would appreciate people's thoughts on whether either seems like a good option, or similar options to look out for! Feels like there's just a ton of choices out there.

E: looking through the link on the PowerEdge, it seems pretty complicated to set up - "no BIOS" and "never sold to consumers" is probably a little beyond my level

I have looked at U-NAS cases quite a bit but the price was always too high. $100 seems like a good deal, but I would make sure you can get the right size motherboard/PSU, as you mentioned.

I would be concerned that the PowerEdge would be really loud.

If I were going to build from scratch right now I would probably get this case: https://www.newegg.com/jonsbo-nas-case-mini-itx/p/2AM-006A-00074?Item=9SIAY3SG2M7485. Off the shelf I would probably just buy a Synology though as the software is very good.

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

The Microsemi RAID controller that's part of HPE servers, at least on the ones I've had access to, does have an option to use ssacli to switch the controller into HBA mode:
ssacli cmd -q "controller slot=N modify hbamode=on forced"

I think I found something in the BIOS to change that as well - I had it reconfigured as an HBA before I even had an OS on it.

I get why they'd still recommend LSI controllers, and if you got your hands on a DL380G9 with no mezzanine card at all - knowing what I know now - I'd skip it and just figure out cabling for the LSI option instead. But it can work, and I really doubt you'll find a 12-bay DL380G9 without one of those cards.

Aware
Nov 18, 2003

Nystral posted:

So I'm debating moving from an HPE MicroServer Gen 10 to a refurbished rack-mount server like a DL380 G9 or 730xd. I'm looking at 12 drives for storage and 2 or 3 rear-mounted drives for OS disks.

Currently the MicroServer is running Ubuntu with a ZFS pool. But the new server is going to be using a built-in RAID card, and ZFS hates not having direct access to the drives.

So as I understand it, I can run the built-in cards in HBA mode. But the TrueNAS folk seem against that idea.

I can buy a third-party LSI HBA card running in IT mode. It seems like I'd need a card or cards with 4 SAS connectors to meet the needs of the front backplane. I'd also need to figure out the OS drives - can I run them in a RAID 1 or whatever off the built-in card while the front backplane is on the LSI?

Or I abandon ZFS for a filesystem less interested in the RAID hardware. BUT I've been using ZFS for 10-ish years across various NAS solutions, so moving to, say, BTRFS is kind of scary. Also I'd rather have something like TrueNAS with a nice GUI for sharing vs. Samba config files and whatnot.

Am I missing anything? What would you do?

I've got an R740xd that I ultimately moved to my work office as a lab device because it was too noisy for home. Newer Dell generations will not let you drop the idle fan speeds low enough for home use, so just be aware of that. Conversely, I just installed about 20 loaded R450s and they are the quietest (at idle) 1RU servers I've ever used.

I can't comment on your controller questions; I didn't bother doing anything past the 1:1 virtual disk mapping for messing with TrueNAS on it. Today it's just an ESXi box with a RAID 1 SSD and a single 8TB spinning disk for lab stuff.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I have the U-NAS NSC-810A (the version with mATX and 2x single-slot PCIe cards) and the design is superb, but the build quality is severely lacking imo (and this was pre-covid; I don't think they've had a single shipment to North America since then). The rubberized faceplate is super duper soft and thin and will scratch even when handled gently during a build (I strongly recommend using a towel and never putting it on a bare surface), and the disk tray LEDs started burning out within like a year. So far only the blue power lights, but I assume the amber activity lights will eventually go as well. The airflow is good (with the fans on Noctua LNAs, since I didn't want to bother figuring out IPMI fan control) and you can feel air coming out the front. It's not super warm in my basement right now, but it hasn't really cooled off either, and my hottest drives are at 47C (which is reasonable).





Like all SFF builds it's challenging, and you need to be very sure of what you're doing and/or be willing to stop and order some parts. I ended up needing a 24-pin ATX extension cable and an 8-pin CPU aux extension cable (ymmv on your particular board, but this one had the typical "top edge" layout, with the 24-pin in the top right area - nothing unusual). I clipped off the extra random poo poo from the PSU harness to keep cable clutter down and then shrink-tubed the ends up nicely. Airspace in the PSU cable area is still very, very tight even so, but you don't have to take it apart every day either.

Using a USB-to-SATA adapter for the boot drive was a flop: the adapter died after a year or two and started causing problems with the OS (nothing wrong with the data on the array itself), so I eventually just booted from an NVMe drive. I replaced the fans with Corsair ML120s during the initial build, and tbh I don't know if that was a win - they started howling a bit on first spin-up after a year or so. It's fine once they're warm, but they like to spin all the time. It's mostly not a problem and I just let them go, because you need the airflow over the drives in designs like this.

Note that you should be able to use short 1U-height NVMe adapters in your PCIe slots if you want to put 3x NVMe in it, plus the one on the mobo... but that blocks the physical slots. The upper slot needs a riser, but even the factory one is way too long - it needs to be like an inch between the connectors, and the factory cable is so long it blocks the other PCIe slot. I think you could also get an actual PCB-adapter-style "90-degree turn" thing and it would work for the lower slot, since it barely needs any height.

I like the "Synology/QNAP style" hotswap chassis aesthetic a lot, so if you know of any other alternatives (with higher build quality, perhaps?) let me know. The Silverstone DS380 isn't really the styling I'm looking for, and iirc it has some build quirks as well. There just aren't many other options for a basic hotswap chassis - a homebrew Synology/QNAP without just buying a Synology/QNAP.

The Sliger Cerberus X can mount quite a few drives and it's small as well - I don't remember how many, though. It's not hotswap, but w/e, that doesn't matter at home. Or the Fractal Define 7 XL holds up to 18 3.5" drives in a full-tower layout; there's the smaller non-XL as well, and note that they also support EE-ATX for big whitebox Supermicro boards (the standoff layout is extremely non-standard). There are some fairly straightforward ATX-ish designs that should be able to hold a lot of drives in that fashion, but I just don't know if anybody's ever done them.

Paul MaudDib fucked around with this message at 07:05 on Nov 5, 2022

foutre
Sep 4, 2011

:toot: RIP ZEEZ :toot:

Paul MaudDib posted:

I have the U-NAS NSC-810A (the version with mATX and 2x single-slot PCIe cards) and the design is superb, but the build quality is severely lacking imo (and this was pre-covid; I don't think they've had a single shipment to North America since then).

Ah, thank you, this whole rundown is very helpful. I imagine the line between 'fun fiddling' and 'frustrating fiddling' varies person to person, and since I definitely don't have much experience troubleshooting these kinds of problems, or doing anything more difficult than slotting pretty standard parts into a big ATX case, this might end up being more on the frustrating side. I might pass on it, or at least get an already-set-up package if I do.

Smashing Link posted:

I have looked at U-NAS cases quite a bit but the price was always too high. $100 seems like a good deal, but I would make sure you can get the right size motherboard/PSU, as you mentioned.

I would be concerned that the PowerEdge would be really loud.

If I were going to build from scratch right now I would probably get this case: https://www.newegg.com/jonsbo-nas-case-mini-itx/p/2AM-006A-00074?Item=9SIAY3SG2M7485. Off the shelf I would probably just buy a Synology though as the software is very good.

Ah cool, will skip the power edge for sure, and look at some Synology options as well.


Eletriarnation posted:

Honestly that U-NAS case looks pretty drat nice - FlexATX PSU, mini-ITX and 8 hotswap bays with 120mm fans? I'd totally get that. Motherboard, PSU, and 8GB of RAM for $300 more probably only makes sense if the motherboard is something special or there's also a processor included. What kind of total budget are you thinking about and do you care about ECC?

I'm shooting for ~$450 or less for the NAS itself, and I'd just start out with 2 drives and build up from there. I don't think I really care about ECC; most of the stuff I'll be storing will either be backed up in the cloud/a couple other places, or just not /that/ important.

I dunno - do you tend to get more storage per dollar going with something you build yourself vs. something off the shelf? Realistically, maybe I'm overestimating how much data I'll end up wanting to store, and I'll never actually go past 4 drives. In general, how different would the day-to-day performance/reliability be between one of the newer Synology NASes and the older ones? I.e., there's a 5-bay DS1513+ that already comes with five 3TB HDDs for $450, which would be a good fit as long as it doesn't matter that it's 8 years old. e: I'm not going to be using it for media streaming or anything, so maybe the newer hardware doesn't matter as much?

Really appreciate all the advice! The number of different options is a bit overwhelming.

foutre fucked around with this message at 05:13 on Nov 5, 2022

Tesseraction
Apr 5, 2009

Thanks for the replies all.

BlankSystemDaemon posted:

Synology doesn't use ZFS (I believe it uses BTRFS on top of some LVM stacking that makes it hard to work with, even from a regular Linux distribution - but it's not my area of expertise)

Yes, correct, and their tech support is useless. All our volumes crashed simultaneously, and they waited 4 days before bothering to log on and safely reboot the crashed volumes that didn't auto-recover. Then they told us we needed to run a system integrity check to be sure the data was fine, but said that for our amount of storage it could take anywhere between minutes and weeks, and we'd have to be offline the entire time. Needless to say, given the price of every hour with the system offline, management is furious.

H2SO4 posted:

You have a manager in the clutches of a VAR that smells blood in the water.

Being fair there's a lot of blood.

Motronic
Nov 6, 2009

Tesseraction posted:

Thanks for the replies all.

Yes, correct, and their tech support is useless. All our volumes crashed simultaneously, and they waited 4 days before bothering to log on and safely reboot the crashed volumes that didn't auto-recover. Then they told us we needed to run a system integrity check to be sure the data was fine, but said that for our amount of storage it could take anywhere between minutes and weeks, and we'd have to be offline the entire time. Needless to say, given the price of every hour with the system offline, management is furious.

Being fair there's a lot of blood.

If your storage is that important to the business, why are you relying on a single device? What's your backup plan? When did you last do a restore?

Sounds like fundamental planning failures. There is no magic device that will fix this, regardless of what filesystem it hoses onto your drive pool.

I hope the takeaway is "we need a more robust storage infrastructure" rather than "surely X device will prevent a repeat of this."

Kivi
Aug 1, 2006
I care

Paul MaudDib posted:

I replaced the fans with Corsair ML120s during the initial build, and tbh I don't know if that was a win - they started howling a bit on first spin-up after a year or so. It's fine once they're warm, but they like to spin all the time. It's mostly not a problem and I just let them go, because you need the airflow over the drives in designs like this.
If those are Corsair Maglevs, you'll kill them with those LNA adapters :ohdear:

Edit: Oh oops, should've read that post.

LNA adapters kill Maglevs because the fans use 12 V to float the fan hubs (hence the maglev name). If you lower the running voltage, you force the fans to ride on the spin-up bearings, which are designed only to start/stop the fan, and that ends up destroying them. If you don't mind the start/stop noises, they'll be fine if you supply them with 12 V and use PWM to slow them down (as Corsair and Sunon intended).
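If the board exposes its fan headers through the kernel's hwmon interface, PWM control from software looks roughly like this (a sketch only - the hwmon index and pwm number are placeholders that vary per board, so check with 'sensors' first):

code:
# switch fan header 1 to manual PWM control, then run at ~50% duty (0-255)
echo 1   > /sys/class/hwmon/hwmon2/pwm1_enable
echo 128 > /sys/class/hwmon/hwmon2/pwm1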

Kivi fucked around with this message at 12:44 on Nov 6, 2022

Tesseraction
Apr 5, 2009

Motronic posted:

I hope the takeaway is "we need a more robust storage infrastructure" rather than "surely X device will prevent a repeat of this."

Absolutely.

Ironically, I had just gotten the tape backup system up and running and had started getting the fundamental parts of the infrastructure backed up - I was on the last of the three key areas when this crash occurred. A real moment of potentially snatching defeat from the jaws of victory.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Kivi posted:

If those are Corsair Maglevs, you'll kill them with those LNA adapters :ohdear:

Edit: Oh oops, should've read that post.

LNA adapters kill Maglevs because the fans use 12 V to float the fan hubs (hence the maglev name). If you lower the running voltage, you force the fans to ride on the spin-up bearings, which are designed only to start/stop the fan, and that ends up destroying them. If you don't mind the start/stop noises, they'll be fine if you supply them with 12 V and use PWM to slow them down (as Corsair and Sunon intended).

haha oops I definitely killed them then

FeastForCows
Oct 18, 2011
I have a QNAP TS-453bmini and I plan to add a TR-004 to it to extend storage. My UPS has a function to tell the NAS to shut down after 5 minutes in case of a power outage. I cannot for the life of me find anything on whether this would also apply to the TR-004 if it's attached to the NAS. Does anyone here happen to know?

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

FeastForCows posted:

I have a QNAP TS-453bmini and I plan to add a TR-004 to it to extend storage. My UPS has a function to tell the NAS to shut down after 5 minutes in case of a power outage. I cannot for the life of me find anything on whether this would also apply to the TR-004 if it's attached to the NAS. Does anyone here happen to know?

It should work as you desire. I have one of those with my old TVS-471, and it would shut down the TR-004 when the power went out and the UPS sent the signal to the NAS.

The TR-004 just shows up as a drive pool assuming you let the NAS manage the drives versus using JBOD. From there, QNAP just unloads the drive pool and the device goes into standby because it’s unused.

FeastForCows
Oct 18, 2011

rufius posted:

It should work as you desire. I have one of those with my old TVS-471, and it would shut down the TR-004 when the power went out and the UPS sent the signal to the NAS.

The TR-004 just shows up as a drive pool assuming you let the NAS manage the drives versus using JBOD. From there, QNAP just unloads the drive pool and the device goes into standby because it’s unused.

I'm planning on letting the NAS manage it, yes. So the TR-004 being in standby won't do any damage if the power goes out (and my UPS battery is empty)?

FeastForCows fucked around with this message at 07:18 on Nov 10, 2022

Wibla
Feb 16, 2011

It's extremely unlikely that you will experience data loss as long as you're not actively writing data when the power disappears.

FeastForCows
Oct 18, 2011
Got it, thanks!

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I’m just updating the BIOS on my X570D4U via the BMC. I figured the dreadfully slow flashing itself was pretty terrible, but apparently I made the mistake of checking “Preserve BIOS settings”, which causes the system to sit at POST veeeererrrrrrry sloooooowly clearing and then importing the old settings.

:psypop:

Is it that terrible on proper enterprise boards, too?

Computer viking
May 30, 2011
Now with less breakage.

Combat Pretzel posted:

I’m just updating the BIOS on my X570D4U via the BMC. I figured the dreadfully slow flashing itself was pretty terrible, but apparently I made the mistake of checking “Preserve BIOS settings”, which causes the system to sit at POST veeeererrrrrrry sloooooowly clearing and then importing the old settings.

:psypop:

Is it that terrible on proper enterprise boards, too?

Everything is slow on server boards. Entering a menu? Powering on far enough to see if you've even connected the monitor properly? Rebooting after accidentally picking the wrong submenu in the confusingly labeled boot options? Preparing the boot device menu?

I watch long youtube videos on the side whenever I need to do anything on server hardware beyond "using an already booted OS".

Thanks Ants
May 21, 2004

#essereFerrari


When I used to have servers to look after, it could take an hour to do firmware updates on things like Dell PSUs. You'd just have to sit there hoping it was going to come back, because PSU firmware updates and flashing the iDRAC itself were the only things you couldn't monitor remotely.

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
It takes like 45 minutes for a quad-CPU server to boot up

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Computer viking posted:

Everything is slow on server boards. Entering a menu? Powering on far enough to see if you've even connected the monitor properly? Rebooting after accidentally picking the wrong submenu in the confusingly labeled boot options? Preparing the boot device menu?

I watch long youtube videos on the side whenever I need to do anything on server hardware beyond "using an already booted OS".

yeah, I was troubleshooting my X99 WS/IPMI and it takes like 2 minutes for the BMC to come up every time you do a cold boot - it'll hang during POST with the POST code display counting down through a range. Sometimes the BMC seems to come up faster and I haven't quite pinned down what the difference is, maybe static IP vs. dynamic IP or something. But there's a green light on the mobo that always lights up just before it POSTs - it's BMC READY or something.

Tesseraction
Apr 5, 2009

A BMC should always be accessible long before the operating system is; its whole purpose is to provide information and control regardless of the state of the hardware below.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
I'm not sure if this is the right thread to ask about this, but are there likely to be sales on good storage drives over the next few weeks? And if so, where would be the best place to find them? I'm totally ignorant about the reputations of various drives at the moment, so I need some advice.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

K8.0 posted:

I'm not sure if this is the right thread to ask about this, but are there likely to be sales on good storage drives over the next few weeks? And if so, where would be the best place to find them? I'm totally ignorant about the reputations of various drives at the moment, so I need some advice.

SSDs (both SATA and NVMe) seem likely to have some sales. HDD prices basically haven't budged from when I built my server 4 years ago.

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.

K8.0 posted:

I'm not sure if this is the right thread to ask about this, but are there likely to be sales on good storage drives over the next few weeks? And if so, where would be the best place to find them? I'm totally ignorant about the reputations of various drives at the moment, so I need some advice.

Shucks.top and slickdeals.net are great for monitoring deals. This time of year is generally the best for sales. For high-capacity drives, $15/TB is a good target; 14TB for $190 or so tends to appear in Nov/Dec too, which is near the best I've seen. You'll generally be buying externals and shucking them, which is almost always cheaper than buying internal drives. All manufacturers are pretty much the same; Seagate had a bad reputation in the past, but they've been fine for a while. 14 or 16TB is generally the price/performance sweet spot right now, but there have been good deals on 12 and 18 too.
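(For reference, $190 for 14TB works out to about $13.60/TB, comfortably under that $15/TB target.)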

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Computer viking posted:

Everything is slow on server boards. Entering a menu? Powering on far enough to see if you've even connected the monitor properly? Rebooting after accidentally picking the wrong submenu in the confusingly labeled boot options? Preparing the boot device menu?
Is this a thing like with other appliances, where the minimally viable microprocessor gets chosen for the job for absolutely no good reason, while a decently fast one is just 2-4 dollars more? Because I sure don't understand how you can make a cheap "desktop" computing platform a la Raspberry Pi with decent performance (in pre-supply-chain-issues times, anyway), and then do poo poo like this.

Computer viking
May 30, 2011
Now with less breakage.

Some of it is for arguably good reasons - there are just a lot more sensors and controllers to initialize and query. And some of it seems to be simple lack of motivation; after all, you buy Dell/HPE/etc. servers because you negotiated a million-dollar deal for a stack of them, not because they're nicer to work with for the engineer who touches them in the end.

Wizard of the Deep
Sep 25, 2005

Another productive workday
More than that, the population buying the biggest volume of servers doesn't value fast start-up time. The devices are designed to be powered up and functional for months or years at a time, and full power cycles are relatively rare. The resolution for long start-up times for those buyers isn't "overbuild the specific functions that will see use maybe once a month", it's "buy more servers and stagger the downtime".

Do you care how fast the oil in your car can be changed?

Ashex
Jun 25, 2007

These pipes are cleeeean!!!
Looking for some advice on how to proceed here. I have a file server with six 2TB WD Reds in a RAID 6, and on occasion a drive dies. I just got my second failure of the year, which is a little unusual, and now I'm wondering if I should change things up.

I'm considering slowly phasing in SSDs, but they're not cheap. Or I chuck an 8TB drive in, move all my data over, then RAID 5 the rest to mirror it. Then when another drive fails I get one more 8TB to be a proper mirror and use the rest for raw storage or something.
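(The capacity math is a wash, for reference: the current six-drive RAID 6 gives (6 - 2) x 2 = 8TB usable, and five 2TB drives in RAID 5 give (5 - 1) x 2 = 8TB, exactly matching the single 8TB drive they'd mirror.)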

Price is about the same either way - or I just stay with the Reds and keep replacing them as business as usual.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Ashex posted:

Looking for some advice on how to proceed here. I have a file server with six 2TB WD Reds in a RAID 6, and on occasion a drive dies. I just got my second failure of the year, which is a little unusual, and now I'm wondering if I should change things up.

I'm considering slowly phasing in SSDs, but they're not cheap. Or I chuck an 8TB drive in, move all my data over, then RAID 5 the rest to mirror it. Then when another drive fails I get one more 8TB to be a proper mirror and use the rest for raw storage or something.

Price is about the same either way - or I just stay with the Reds and keep replacing them as business as usual.

How old are the 2TB drives? You might be able to replace them under warranty. If they're out of warranty, it's time to buy some new ones. I wouldn't bother with SSDs unless you have a specific performance need or cash to burn. Do you have your data backed up?

Ashex
Jun 25, 2007

These pipes are cleeeean!!!

fletcher posted:

How old are the 2TB drives? You might be able to replace them under warranty. If they're out of warranty, it's time to buy some new ones. I wouldn't bother with SSDs unless you have a specific performance need or cash to burn. Do you have your data backed up?

I'll need to check the warranty on this one, but they should all still be covered, so I'm doing the usual warranty process on each - I'm just getting a bit annoyed at the failure rate. I back up to Backblaze with Duplicati, so I'm not worried about data loss; this is just my usual "ugh, now again" response.

I'm only considering SSDs because the server is in a closet with just a small air gap at the top, so heat buildup might be a problem.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So apparently Kubernetes announced the deprecation of dockershim, which is what allowed TrueNAS to use Docker as its container engine. The result is apparently a hacky overlayfs-on-ZFS setup to get the newest release of k3s running, instead of native ZFS datasets and snapshots. (--edit: I guess overlayfs is desired, after further investigation.) On top of that, I expect native Docker support to be dropped from the base image.

I do not like that one bit.

Combat Pretzel fucked around with this message at 12:59 on Nov 13, 2022

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Combat Pretzel posted:

So apparently Kubernetes announced the deprecation of dockershim, which is what allowed TrueNAS to use Docker as its container engine. The result is apparently a hacky overlayfs-on-ZFS setup to get the newest release of k3s running, instead of native ZFS datasets and snapshots. (--edit: I guess overlayfs is desired, after further investigation.) On top of that, I expect native Docker support to be dropped from the base image.

I do not like that one bit.

1. Docker is the worst container runtime at this point, good riddance
2. I don't have any first-hand experience with using ZFS as a storage backend, but OverlayFS is what's generally being used all over the place. With the merging of this recent PR it should get less hacky
3. If TrueNAS SCALE lets you use Debian repos, you can probably keep installing Docker and Podman via apt (rough sketch below) - or learn to use the container tooling that's actually supported on there
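Something like this, presumably (an untested sketch - this is unsupported on SCALE and an upgrade may well undo it; docker.io is the Debian-packaged engine):

code:
# on SCALE's Debian base, at your own risk
sudo apt update
sudo apt install docker.io   # or: sudo apt install podman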

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice
I think the main argument for using TrueNAS SCALE is Docker. Not sure why else one would.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?
Linux containers != Docker.

If you want to use Docker to orchestrate containers, you'd benefit from a different host OS choice than TrueNAS SCALE, where it's unsupported.

Hughlander
May 11, 2005

FYI, 18TB Easystores are at an all-time low right now.


Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
Synology introduced the DS923+, and it too uses a Ryzen CPU - so much for Plex transcoding.
