|
Supermicro boards come with this really cool tool-less plastic thingy that retains the m.2. I love that solution, but it of course doesn't work for gamer boards that have heatsinks you need to remove to get at the socket. Worst part about m.2 is the tiny tiny screw
|
# ? Nov 10, 2023 19:23 |
|
|
# ? May 28, 2024 14:42 |
|
Kung-Fu Jesus posted:Supermicro boards come with this really cool tool-less plastic thingy that retains the m.2. I love that solution, but it of course doesn't work for gamer boards that have heatsinks you need to remove to get at the socket. Worst part about m.2 is the tiny tiny screw

100% agree, that's definitely the flaw in M.2 having really been developed for laptop use. It would be nice if U.2/EDSFF cabled enclosures within a case became a more widespread thing, with the bonus that you could have external hotplug slots! Those Supermicro retention gubbins are genius.
|
# ? Nov 10, 2023 19:40 |
|
priznat posted:100% Agree, definitely the flaw in m.2 being really developed for use in laptops. It would be nice if U.2/EDSFF cabled enclosures within a case became a more widespread thing, with the bonus you could have external hotplug slots!

Amen. We could get better airflow over these hot-rear end SSDs too.
|
# ? Nov 10, 2023 19:44 |
|
priznat posted:100% Agree, definitely the flaw in m.2 being really developed for use in laptops. It would be nice if U.2/EDSFF cabled enclosures within a case became a more widespread thing, with the bonus you could have external hotplug slots!

I guess the fewer PCIe slots on today's motherboards (or them being covered up by the GPU) makes it less feasible to just grab a PCIe expansion card that has a couple m.2 slots on it.
|
# ? Nov 10, 2023 19:47 |
That's exactly why U.2 got developed though, isn't it? PCIe NVMe SFF front-loading bays, because the tech was too good to keep only in M.2, and it's already down to like $0.10/GB.
|
|
# ? Nov 10, 2023 19:55 |
|
^^ Yeah, a 2.5" option that can fit in existing SATA bays is great. Then there's U.3 as well, which is a bit more confusing as it allows either SATA/SAS or NVMe over the same connector (tri-mode), but those are rarer. EDSFF is even better, with a few different form factor sizes and up to x16 support.

Kibner posted:I guess the fewer PCIe slots on today's motherboards (or them being covered up by the GPU) makes it less feasible to just grab a PCIe expansion card that has a couple m.2 slots on it.

Yeah, usually you just get 20 lanes from the CPU, and that'd be 16 for the GPU slot + 4 for the M.2 (or sometimes 24, with 4 more for another M.2). Additional slots would mean splitting the x16 (which wouldn't really hurt GPU access that much, tbh). Off the southbridge you can get more depending on the chipset, but that's usually connected to the CPU at only x4, so you're bottlenecked there. You need to get into HEDT or server CPUs for the big wide PCIe lane availability! PCIe switch options are too spendy for consumers as well.

priznat fucked around with this message at 20:01 on Nov 10, 2023 |
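The lane arithmetic being described can be spelled out in a few lines (the figures are the typical consumer-platform numbers from the post, not any particular board):

```python
def remaining_lanes(cpu_lanes: int, allocations: dict[str, int]) -> int:
    """CPU PCIe lanes left after the fixed allocations; negative means oversubscribed."""
    return cpu_lanes - sum(allocations.values())

# Typical consumer CPU: 20 lanes, all spoken for by the GPU slot + one M.2
print(remaining_lanes(20, {"gpu_x16": 16, "m2": 4}))               # 0
# Platforms with 24 CPU lanes leave room for a second x4 M.2
print(remaining_lanes(24, {"gpu_x16": 16, "m2": 4}))               # 4
# Bifurcating the GPU slot x8/x8 trades GPU lanes for an M.2 riser
print(remaining_lanes(20, {"gpu_x8": 8, "riser_x8": 8, "m2": 4}))  # 0
```

Anything hung off the chipset instead shares its single x4 uplink to the CPU, which is the bottleneck mentioned above.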
# ? Nov 10, 2023 19:58 |
|
The real win of PCIe5 isn't going to be jumbo pipelines for GPU or single drives, it's going to be making x1 slots useful.
|
# ? Nov 10, 2023 20:32 |
|
big scary monsters posted:and maybe 20TB of research data that probably I will never need to look at again but if I do need it I will really need it. The research stuff needs to be accessed almost never

It sounds like you have an unusual use case, where ZFS provides way too much active protection and too little protection against a house fire. I would probably store one copy on a single 20TB hard drive and create checksums of all the data, then check the data regularly. Another backup copy I would put in Amazon Glacier for emergencies. I might want a third copy somewhere in the middle: something with quicker retrieval than Glacier and hopefully not much more expensive.
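The "create checksums of all the data, then check the data regularly" step is easy to sketch with Python's standard hashlib module (the manifest layout here is made up for illustration; plain `sha256sum` files or par2 get you the same thing without writing code):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so huge files never need to sit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Relative path -> checksum for every file under root."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the files whose current checksum no longer matches the manifest."""
    return [rel for rel, digest in manifest.items()
            if sha256_of(root / rel) != digest]
```

Build the manifest once when the drive is written, store a copy of it somewhere other than the drive itself, and run the verify pass on whatever schedule "regularly" means to you.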
|
# ? Nov 10, 2023 21:02 |
|
big scary monsters posted:maybe 20TB of research data that probably I will never need to look at again but if I do need it I will really need it.

I'm guessing that you have already crunched this down as much as it can be compressed, with domain-specific compression or, failing that, zstd or xz? For this stuff you want both a copy online somewhere and multiple copies in different physical locations. If you can get it under 20TB after compression, it'll fit on a single hard disk, and you could just put it on an external hard disk somewhere; or if you can borrow a tape drive, you could put it on tape and stash those somewhere.

For the cloud option, AWS will keep it forever completely safely but is more expensive. Maybe look at Backblaze B2 or Cloudflare R2? Backblaze has been around for ages, has mediocre performance, and only has a single datacenter. Cloudflare is growing in this space and is super aggressive about taking market share from AWS. 20TB is a lot though, so even at Backblaze's $6/month per TB that's $120/month, or $1440/year. That buys a lot of external hard disks stashed in safes for data that you never access.

https://www.backblaze.com/cloud-storage/pricing
https://www.cloudflare.com/developer-platform/r2/

If you do choose to use AWS Glacier, realize they are holding your data under ransom and getting it back is going to cost you big-time.
|
# ? Nov 10, 2023 21:12 |
Yeah, look at Amazon Glacier Deep Archive. It has like half-a-day retrieval times, but it's super, super cheap: 20TB would only be ~$20 a month, and you wouldn't need to worry about silent bitrot or anything.
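Putting the numbers from the last two posts side by side (prices as quoted in the posts, so treat them as ballpark figures rather than current rates):

```python
def storage_cost(tb: int, usd_per_tb_month: float, months: int = 1) -> float:
    """Flat per-TB storage cost; ignores egress and retrieval fees,
    which are exactly what makes getting data back out of Glacier expensive."""
    return tb * usd_per_tb_month * months

print(storage_cost(20, 6.0))              # Backblaze B2 at $6/TB-month: $120/month
print(storage_cost(20, 6.0, months=12))   # ...or $1440/year
print(storage_cost(20, 1.0))              # Deep Archive at roughly $1/TB-month: ~$20/month
```

The Glacier caveat above still applies: this flat-rate sketch says nothing about what a full restore would cost.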
|
|
# ? Nov 10, 2023 21:13 |
|
We have our research data on LTO-8 tape for similar reasons. It lives on a zpool on the file server, because that makes it easy to dig through on short notice, and on tape because we like having a separate "untouchable" copy, and the most crucial stuff gets a third copy on one of the university or hospital secure computing facilities.

Also, I think I finally fixed my weird SATA drive checksum errors. By moving everything to NVMe. I'm sure there are cheaper options, but I bought an Asus 4x NVMe M.2 card (this one), four 4TB 990 Pro drives, and put it in slot 1 on an AM4 ASRock motherboard. Somehow, surprisingly, it Just Works.
|
# ? Nov 10, 2023 22:25 |
|
Thanks for the suggestions. I also have a copy of that data on the university servers, where people whose job it is to worry about such things are hopefully keeping it safe. Sounds like maybe just a couple of offline drives with copies of the data, kept in different locations, would be a better solution than keeping it on an online NAS.
|
# ? Nov 10, 2023 22:37 |
|
https://www.kickstarter.com/projects/icewhaletech/zimacube-personal-cloud-re-invented
https://nascompares.com/2023/11/10/the-zimacube-nas-teardown-early-review/
https://www.youtube.com/watch?v=wRowwdfCJ3Y

This looks like an interesting piece of hardware: a ~22cm cube with space for 6 HDDs and 4 M.2 SSDs.

Tamba fucked around with this message at 23:41 on Nov 10, 2023 |
# ? Nov 10, 2023 23:37 |
|
Looks nice, I'd hope you can easily nuke the included software if you want.
|
# ? Nov 11, 2023 00:07 |
|
Twerk from Home posted:SnapRAID is neat and very flexible, and you're very likely to be able to recover from the old 10TB disks dying. Just realize that it's a much simpler tool than anything else you're comparing it to, it won't tell you when a disk fails, you'll have to notice yourself, and that it doesn't recover automatically, you'll be running commands to recover.

Thanks for the reply. Yes, that is a very good thought about not having a scheduled sync that runs after a drive failure. I understand that in the case of disk failure I'll need to run the fix commands myself and mess with a couple of files after the disk is replaced.

My thinking was that I'd go with OpenMediaVault with the SnapRAID plugin, pool the drives with the MergerFS plugin*, and then have OMV handle my scheduled weekly SMART tests and SnapRAID's scrubs. Presumably in this case I'll be getting notifications of at least SMART issues, and of data integrity issues via scrub, and so can turn off a scheduled sync and investigate. But I also don't mind just doing my syncs manually and syncing every time I get a notification that SMART and scrub ran successfully without problems. Data on the drive actually changes very rarely; I'd say I'm only adding new files to the NAS about once or twice a month.

*I'd probably use the default policy to allocate data based on free space (so it'd fill up a portion of the newer 16TBs before moving to the older 10TBs), or might even try to figure out how to have it fill up my oldest 50k-hour 10TB last, as the newest data will be the most easily replaceable.

However, it seems that OMV6 doesn't have temperature monitoring for disks?? One of my disks was averaging 50C with spikes of up to 55C through summer with TrueNAS. I'm hoping that by moving over to SnapRAID, not having the disks spinning constantly and only spinning up for access / scrubs / syncs will keep their average temperatures down.
E: As a bit of an aside, and a more general question in terms of NAS practices: what are the main reasons not to have the NAS as part of your personal computer, instead of keeping it a dedicated box? For example, I see no reason why I couldn't achieve what I'm looking to do using DrivePool and SnapRAID on my Windows 11 computer and just shoving the 6 disks into my case.

Shrimp or Shrimps fucked around with this message at 02:04 on Nov 11, 2023 |
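The "don't let a scheduled sync run after a failure" rule discussed above boils down to a guard like this (a toy sketch of the logic only, not anything OMV or SnapRAID actually ship):

```python
def should_run_scheduled_sync(smart_ok: bool, scrub_ok: bool) -> bool:
    """Only let an automated `snapraid sync` go ahead when the most recent
    SMART test and scrub both came back clean. Otherwise hold off, so the
    parity still reflects the pre-failure state and `snapraid fix` can
    recover files after a human investigates."""
    return smart_ok and scrub_ok

print(should_run_scheduled_sync(True, True))    # sync as scheduled
print(should_run_scheduled_sync(False, True))   # SMART flagged a disk: hold
print(should_run_scheduled_sync(True, False))   # scrub found errors: hold
```

The point being that a sync after a disk failure would update parity against the broken state, which is why the advice is to notice failures before the next sync fires.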
# ? Nov 11, 2023 01:51 |
|
How many nvme m2 slots do modern mobos tend to have now? Say, a mid-range mobo for a Ryzen 7700 or so, or Intel equivalent.
|
# ? Nov 11, 2023 16:38 |
|
PirateBob posted:How many nvme m2 slots do modern mobos tend to have now? Say, a mid-range mobo for a Ryzen 7700 or so, or Intel equivalent.

Between 2 and 4, varying with motherboard form factor and chipset.
|
# ? Nov 11, 2023 16:43 |
|
Shrimp or Shrimps posted:E: As a bit of an aside and more general question in terms of NAS practices, what are the main reasons not to have the NAS as part of your personal computer, instead keeping it a dedicated box? For example, I see no reason why I couldn't achieve what I'm looking to do using DrivePool and SnapRAID on my Windows 11 computer and just shoving the 6 disks into my case.
|
# ? Nov 11, 2023 17:10 |
|
A big chunk of the benefit of a dedicated NAS comes down to whether or not anyone but you is relying on it and the services it provides. If it's literally just you then no big deal that you reboot your computer whenever you want. If someone else is using it / watching a movie hosted on it / whatever, and you reboot your desktop or it bluescreens or anything, now you're both dealing with it. The other main one is security. Your Windows desktop that you're using for day to day use has a lot more attack vectors for malware, and a lot more potential exposure to malware, than a dedicated NAS. This isn't to say that malware on your PC can't still cause problems on a mounted network share, but it's less likely - and there's things you can do on the NAS side to mitigate how much damage can be done that way.
|
# ? Nov 11, 2023 18:06 |
|
Dumb question: How necessary is a dedicated fan for a 3-drive cage in a case that otherwise has good-to-great airflow? I ask because the fan of the cage I bought bumps up against a part of the case and I am outside the return window for the cage. The drives should not be seeing much action. They will be hosting storage for a home LANCache, movies, music, photos, etc. No VM's, only two users accessing the drives.
|
# ? Nov 11, 2023 19:13 |
|
Kibner posted:Dumb question: How necessary is a dedicated fan for a 3-drive cage in a case that otherwise has good-to-great airflow? I ask because the fan of the cage I bought bumps up against a part of the case and I am outside the return window for the cage.

If they're getting some airflow already it's not really that big a deal. You should use hwinfo64 to confirm the drive temps are acceptable. Also, I wouldn't be that concerned about the fan housing touching the HDD cage, if that's what you're describing.
|
# ? Nov 11, 2023 20:24 |
|
VelociBacon posted:If they're getting some airflow already it's not really that big a deal. You should use hwinfo64 to confirm the drive temps are acceptable.

Yeah, it’s a fan screwed to the back of the drive cage.
|
# ? Nov 11, 2023 20:31 |
|
Kibner posted:Yeah, it’s a fan screwed to the back of the drive cage.

I'd probably still run it, assuming the fan fits and just contacts the case. But yeah if temps are fine it's kinda your call. Generally these fans also serve as intake fans so it's helping everything out to have more intake flow.
|
# ? Nov 11, 2023 21:08 |
|
Shrimp or Shrimps posted:
OMV does have temperature monitoring, it's in the S.M.A.R.T settings. You can alert when it's above a maximum value, or when it has increased by more than X degrees since the last check.
|
# ? Nov 11, 2023 21:09 |
|
Tamba posted:OMV does have temperature monitoring, it's in the S.M.A.R.T settings.

Just make sure your notifications are set up and actually alert you. At one point I had drives simmering between 48 and 52C and had no idea.
|
# ? Nov 11, 2023 22:11 |
|
VelociBacon posted:I'd probably still run it, assuming the fan fits and just contacts the case. But yeah if temps are fine it's kinda your call. Generally these fans also serve as intake fans so it's helping everything out to have more intake flow.

Yeah, I'm going to need to dremel some more material away to do that, which is fine. I'll do that tomorrow. Should also be able to begin assembly. Will definitely take pictures when it's put together.

FYI, if anyone wants to build their own home server/NAS combo, just get the Silverstone CS382. It has 8 hot-swappable 3.5" bays on the front with room for 3 more internal drives. Much less fuss than what I'm having to do with my FT02, and much smaller.
|
# ? Nov 12, 2023 00:01 |
Kibner posted:Yeah, I'm going to need to dremel some more material away to do that, which is fine. I'll do that tomorrow. Should also be able to begin assembly. Will definitely take pictures when it's put together.

They basically fixed a huge amount of the issues I had with the original. Annoyingly, availability is absolute poo poo in Denmark.
|
|
# ? Nov 12, 2023 00:12 |
|
I want one, but would need some proper fan control for those 92mm fans.
|
# ? Nov 12, 2023 00:22 |
|
The DS380 was great because it was small, half the volume of that other case...but yes, flaws. I did see some creative duct work done inside of them to get airflow where it needs to be.
|
# ? Nov 12, 2023 00:36 |
Wibla posted:I want one, but would need some proper fan control for those 92mm fans.

Moey posted:The DS380 was great because it was small, half the volume of that other case...but yes, flaws.

One of those builds guaranteed to appease the blood gods of IT.
|
|
# ? Nov 12, 2023 00:46 |
|
Wibla posted:I want one, but would need some proper fan control for those 92mm fans.

Eyeballing it, but it looks like it has PWM fans. So you could just plug them into your motherboard if the drive cage itself doesn't expose the control to you. If they aren't, you could also get a couple of Arctic F9 PWM PST CO fans to replace them.
|
# ? Nov 12, 2023 01:01 |
|
Kibner posted:Yeah, I'm going to need to dremel some more material away to do that, which is fine. I'll do that tomorrow. Should also be able to begin assembly. Will definitely take pictures when it's put together.

I was looking at the Jonsbo N3: 8 bays in a pretty small footprint, and people seem to like the N2.

https://nascompares.com/review/the-jonsbo-n3-nas-case-review/
https://www.newegg.com/black-jonsbo-n3-mini-itx/p/2AM-006A-000E1
|
# ? Nov 12, 2023 01:53 |
|
Interesting. If I wasn’t trying to reuse my atx board and other hardware, I’d have considered that.
|
# ? Nov 12, 2023 05:05 |
|
Kibner posted:Interesting. If I wasn’t trying to reuse my atx board and other hardware, I’d have considered that.

the silverstone cs382 you posted is micro-atx, not atx
|
# ? Nov 12, 2023 08:41 |
|
HalloKitty posted:the silverstone cs382 you posted is micro-atx, not atx

Yeah, which is also why I didn’t buy it and am modding my FT02, instead. I realize I explained myself poorly.
|
# ? Nov 12, 2023 12:09 |
|
Kibner posted:Yeah, which is also why I didn’t buy it and am modding my FT02, instead. I realize I explained myself poorly.

D'oh, sorry, I skimmed and wanted to make sure you didn't buy the wrong thing by mistake
|
# ? Nov 12, 2023 13:41 |
|
Tamba posted:OMV does have temperature monitoring, it's in the S.M.A.R.T settings.

Oh yes, I've seen that now. Having looked through some videos of OMV, it seems it's not really lacking all that much compared to TrueNAS. From the way people talked about it on Reddit, I was expecting it to seem unfinished and janky, but honestly it's going to do everything I need it to, and way more that I don't need it to.

I've decided to forego MergerFS and instead just deal with 4 mounted disks and manually put files where I want them, after having read that when accessing a shared MergerFS pool via NFS, all drives were spinning up. I'm not sure if the same behavior will happen over SMB, but I don't want it to and then have to tear it all down and restart.
|
# ? Nov 12, 2023 13:52 |
|
The CS382 seems a bit unnecessarily short: any fans for extraction at the top conflict with using the PCIe slots. It would seem to rule out a NAS build where you need a GPU + an HBA for the drives + a 10GbE card. Would a few extra centimetres of clearance really have hurt the design?
|
# ? Nov 12, 2023 14:19 |
|
|
Shrimp or Shrimps posted:Oh yes I've seen that now. Having looked through some videos of OMV, it seems that it's not really lacking for all that much when compared to Truenas. From the way I'd read people talking about it on Reddit, I was expecting it to seem unfinished and janky but honestly it's going to do everything I need it to and way more that I don't need it to. Have decided to forego MergerFS and instead just deal with 4 mounted disks, and manually put files where I want them, after having read that when accessing the shared mergerfs pool via NFS, that all drives were spinning up. I'm not sure if the same behavior will happen over SMB, but I don't want it to and then have to tear it all down and restart.

I switched from FreeNAS to OMV5 and have been using it since, with ZFS. I can't really say I've missed anything from FreeNAS, and the OMV5 --> OMV6 upgrade went through without any issues. I especially like the recent-ish update to the OMV extras where Docker Compose is just in the OMV UI now (it used to install and configure Portainer for you and make you use that).
|
# ? Nov 12, 2023 14:32 |