|
Historically NFS has had weak security functionality. It obviously supported user-level privileges, but the low-level stuff was pretty laissez-faire. It was only intended for use locally within a network, where it was protected by the secure perimeter. NFSv4 introduced a ton of better security: host verification, encryption, better authentication support, etc.
|
# ? Apr 28, 2023 15:12 |
|
|
SlowBloke posted:You are right, I used the wrong term. It's just that most of the low end filers i currently handle can also do objects and my brain autocompleted. Talking as a complete profane, does making an iscsi lun on freenas or truenas let you keep using snapshotting options? Cause if that's not the case, op will need to keep that in mind. You can still take a snapshot of a ZFS dataset that contains a volume, and IIRC a volume is what you present as the backend for iSCSI. Edit: However, since ZFS has no understanding of the contents of a volume, all you can do with the snapshot is revert or clone the entire volume. You'd have to present the clone to a client to mount it, browse it, and pull a file out. The thing I'd really dislike about that is that if you do end up with a few unrecoverable bad blocks on a ZFS dataset, ZFS can tell you what files are on those blocks. You can replace the bad disk, remove or replace the files that had bad blocks, and then your pool is healthy again. If the entire dataset is one big volume, even one unrecoverable bad block means your volume is bad, and you're stuck hoping that fsck or chkdsk can take care of you. Zorak of Michigan fucked around with this message at 16:56 on Apr 28, 2023 |
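The revert/clone workflow described above looks roughly like this from the shell (pool and zvol names are made up for illustration):

```shell
# Snapshot the zvol that backs the iSCSI LUN
zfs snapshot tank/luns/vm0@good

# Option 1: revert the entire volume to the snapshot
zfs rollback tank/luns/vm0@good

# Option 2: clone the snapshot and present the clone as a second
# iSCSI LUN, then mount it on a client and copy files out of it
zfs clone tank/luns/vm0@good tank/luns/vm0-rescue
```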
# ? Apr 28, 2023 16:53 |
SlowBloke posted:You are right, I used the wrong term. It's just that most of the low end filers i currently handle can also do objects and my brain autocompleted. Talking as a complete profane, does making an iscsi lun on freenas or truenas let you keep using snapshotting options? Cause if that's not the case, op will need to keep that in mind. EDIT: To add a bit of technical explanation, in case anyone's curious - a snapshot is just a pointer to the transaction group that the snapshot was written to disk with, and because ZFS uses a tree of hashes, it points to all hashes prior to the snapshot. This is also what causes ZFS to not be able to clean blocks before all references have been removed - so for example if you delete a file and it's present in a prior snapshot, it's still physically on the disk. Previously, this meant that you couldn't easily use incremental zfs send|receive because you couldn't delete those transaction groups on the source - but with zfs bookmark you can get the space used up by snapshots back, without sacrificing the ability to do incremental transfers. Pablo Bluth posted:Historically NFS has had weak security functionality. It obviously supported user level privileges but the low level stuff was pretty laissez-faire. It was only intended for use locally within a network where it was protected by the secure perimeter. NFSv4 introduced a ton of better security; host verification, encryption, better authentication support, etc. If you set it up properly, you have better security than SMB, and more importantly accountability as part of a full-AAA setup. Another way to get that now is to use NFS over TLS with AES-GCM, and the only way you're going to do that currently is if you're running FreeBSD-CURRENT, because that's the only place it's implemented (big props to Rick Macklem, who's been working on NFS since the 80s in one form or another). It's supposed to be added in NFSv4.3, I think.
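A sketch of that bookmark workflow, with made-up dataset and host names:

```shell
# Full send of the first snapshot to a backup host
zfs snapshot tank/data@1
zfs send tank/data@1 | ssh backup zfs receive dozer/data

# Replace the source snapshot with a bookmark: it pins the
# point-in-time needed for incrementals but holds no blocks,
# so the space the snapshot referenced is freed
zfs bookmark tank/data@1 tank/data#1
zfs destroy tank/data@1

# Later: incremental send starting from the bookmark
zfs snapshot tank/data@2
zfs send -i tank/data#1 tank/data@2 | ssh backup zfs receive dozer/data
```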
What NFSv4 did was add NFSv4 ACLs (which Linux still doesn't support) with user@domain.tld instead of UID+GID, and they're intentionally designed to be broadly compatible with everything including Windows. The other major thing NFSv4 did is move locking, RPC, and everything else onto a single TCP port (port 2049), so that, combined with NFS over TLS, you can run NFS over the internet. NFSv4.1 added pNFS (parallel NFS), and NFSv4.2 added sparse files with hole punching, server-side copies, and a bunch of other nice-to-have things. BlankSystemDaemon fucked around with this message at 18:59 on Apr 28, 2023 |
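On the FreeBSD side, an NFSv4 export with the TLS requirement looks something like the fragment below. Paths and networks are placeholders, and the -tls option additionally needs rpc.tlsservd(8) and kernel TLS configured, so treat this as a sketch rather than a drop-in config:

```shell
# /etc/exports (FreeBSD) -- NFSv4 root plus a TLS-only export
V4: /export -sec=sys
/export/projects -tls -network 203.0.113.0 -mask 255.255.255.0
```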
|
# ? Apr 28, 2023 18:32 |
|
With the one downside of needing working Kerberos. Don't get me wrong, Kerberos is good and nice ... but I will forever have an aversion after trying to get NFSv4 to work using a Windows AD domain, where I'm not a domain admin, as the Kerberos server. Malus points for having the densest concentration of "not working but also giving me zero useful feedback" of any software I've tried to make use of, and I'm including Windows in that.
|
# ? Apr 28, 2023 18:50 |
Computer viking posted:With the one downside of needing working kerberos. Don't get me wrong, kerberos is good and nice ... but I will forever have an aversion after trying to get NFSv4 to work using a Windows AD domain where I'm not a domain admin as the kerberos server. Malus points for having the densest concentration of "not working but also giving me zero useful feedback" of any software I've tried to make use of, and I'm including Windows in that. The real changes are in the wiki, and are intended to end up in the handbook. If I recall correctly, you were running into trouble because you didn't control all of the server infrastructure - which is always going to get you into all sorts of edge-cases.
|
|
# ? Apr 28, 2023 19:00 |
I don't want to deploy and maintain an entire separate server for Kerberos auth. I don't want to try and wrap my head around domain administration for active directory for a small home lab or media server. I just want a simple username/password. It boggles the mind that NFS doesn't support this as a simple low level gate to access the share and then run everything else like it has after that. This is why everyone uses SMB.
|
|
# ? Apr 28, 2023 19:06 |
|
Last time I tried to do an AD-authenticated Linux fileserver setup I wound up just giving up and looking for a third-party solution to do it for me. I think it was Centrify? The primary issue was around crypto limitations where AD had a bunch of routines not supported by the Linux client drivers unless you downgraded to something like DES or MD4, but this was all like 15 years ago now and I hope to dear sweet Baby Jesus that it's changed for the better since.
|
# ? Apr 28, 2023 19:13 |
|
Late for the power consumption talk, but I finally hooked up my new Eaton UPS today and got to see my server's wattage for the first time. The system has an X570D4I-2T motherboard, Ryzen 3600 CPU, 64 GB of ECC RAM, and 8 x 14TB WD white label drives. After booting up ESXi and turning on the TrueNAS CORE VM it was idling at 80-90 watt, and then after also starting up the Ubuntu VM with all my services that seemed to bring it up into the 90-100 watt range. Oh, and this is with an UAP-AC-PRO connected to the UPS as well. I guess that's a bit of money per year at current energy prices, but the consumption was lower than my guesstimates and I'm quite happy with it. bawfuls posted:Yeah where I live 100W load 24/7/365 would cost about $500 a year Where is this? That sounds absolutely bonkers. Keito fucked around with this message at 19:24 on Apr 28, 2023 |
# ? Apr 28, 2023 19:22 |
|
BlankSystemDaemon posted:If I recall correctly, you were running into trouble because you didn't control all of the server infrastructure - which is always going to get you into all sorts of edge-cases. That's certainly the biggest single problem, yes. A specific annoyance within that is that I guess the "join a machine to the domain" permission for AD does something similar to generating a keytab on the domain controller - which is why I can get a samba server to use kerberos authentication, but not set up NFSv4 (since running kadmin on a domain controller is not something I'll ever be allowed near). It's unfair to blame kerberos and NFS for Microsoft and work policies, but it is frustrating. And "requiring command line work with high access on the domain controller to set up a file server" is not fantastic no matter why.
|
# ? Apr 28, 2023 19:34 |
|
Wibla posted:I'd be more concerned about running 8TB barracudas in a fileserver. Get drives rated for NAS use. Made some alterations based on advice given by the thread: EDIT: If someone stumbles upon this, the Gigabyte H610I DDR4 requires a BIOS update before supporting Intel gen 13. It does not support Q-Flash Plus (CPU-less BIOS flashing). quote:PCPartPicker Part List: https://pcpartpicker.com/list/zMj3rD
* Added an NVMe
* Switched to an Exos drive
* Switched to the smaller Node 304 case
* Switched to a different small PSU for the Gold rating, not the higher wattage.
TengenNewsEditor fucked around with this message at 14:24 on May 7, 2023 |
# ? Apr 28, 2023 20:17 |
|
Keito posted:Where is this? That sounds absolutely bonkers. The combined generation & transmission rates on my last bill were between $0.52-0.63/kWh
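For anyone checking the math, the ~$500/yr figure holds up at those rates:

```shell
# 100 W continuous for a year, at the $0.52-0.63/kWh quoted above
kwh=$(awk 'BEGIN { print 0.1 * 24 * 365 }')     # 876 kWh/year
awk -v kwh="$kwh" 'BEGIN {
    printf "at $0.52/kWh: $%.0f/yr\n", kwh * 0.52   # $456
    printf "at $0.63/kWh: $%.0f/yr\n", kwh * 0.63   # $552
}'
```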
|
# ? Apr 28, 2023 22:53 |
|
TengenNewsEditor posted:Made some alterations based on advice given by the thread: Given how cheap RAM is now, why not get a 32GB kit? For about $5 more you could get the TeamGroup kit from Newegg
|
# ? Apr 29, 2023 01:06 |
|
TengenNewsEditor posted:Made some alterations based on advice given by the thread: Just finished my build in the Node 304, my second time building in that case and I'm still very happy with it. Some funny business if your PSU has the power inlet towards the left edge, but otherwise hard to complain. This is basically me upgrading the NAS I've mentioned a couple times in this thread, but I just replaced every component.
i5-11600K with Thermalright Peerless Assassin cooler
ASRock H510M-ITX/ac mobo
Corsair Vengeance LPX 3200MHz DDR4 16GB
EVGA 550 B5 PSU
Drives are just a 256GB SSD and a 12TB WD Red.
Really like this PSU for this application - it was around $120 CAD and it's (crucially) only 150mm, which helped with cable management. It's also fully modular, and it has eco mode (turns the fan off if not needed). I built the other PC with a 980 Ti in this same case with, I think, a semi-modular 155mm PSU, and it was quite challenging to do a good job of managing the cables. Also has a 5 year warranty, which I think is quite good for that price. I have to ask, is it a thing that WD Red drives are louder when they're higher capacity? I'm coming from a WD Red 4TB and it was basically silent; this WD Red 12TB will occasionally chonk in a way I haven't heard since I was a kid with early HDDs. There's nothing important on it, but I'm wondering if I maybe have a bad drive, or if there are just more platters in a 12TB and more arms and mass moving around and I just hear it a bit more. I have all the fans on silent profiles and the PC is completely quiet except for the HDD. VelociBacon fucked around with this message at 02:33 on Apr 29, 2023 |
# ? Apr 29, 2023 02:25 |
|
drat that’s clean! I’m for sure not posting mine now lol
|
# ? Apr 29, 2023 03:55 |
|
Wild EEPROM posted:Given how cheap ram is now, why not get a 32gb kit? for about $5 more you could get the teamgroup kit from newegg drat, RAM really is cheap now, thanks for the heads up. VelociBacon posted:Just finished my build in the node 304, my second time building in that case and I'm still very happy with it. Some funny business if your PSU has the power inlet towards the left edge but otherwise hard to complain. This is basically me upgrading the NAS I've mentioned a couple times in this thread, but I just replaced every component. I switched to the Node 304 - in large part - due to your posts in this thread!
|
# ? Apr 29, 2023 04:13 |
|
I know folks on the TrueNAS forums think going enterprise or nothing is the way to go. I was going to order one of the Topton N6005 boards, but there seems to be some debate on their overall performance because of the lack of PCIe lanes. Does anyone have any recommendations for something that isn't a $900 Supermicro motherboard but also isn't cheap consumer junk that would run TrueNAS?
|
# ? Apr 29, 2023 04:29 |
|
Beve Stuscemi posted:drat that’s clean! I’m for sure not posting mine now lol No way man let's see!
|
# ? Apr 29, 2023 04:38 |
|
KKKLIP ART posted:I know folks on the TrueNAS forums think going enterprise or nothing is the way to go. I was going to order one of the Topton N6005 boards but there seems to be some debate on their overall performance because of the lack of lanes. Does anyone have any recommendations for something that isn't quite "$900 supermicro" motherboard and cheap consumer junk that would run truenas? any of the current gen celerons should be more than ample for most not-insane-user-count NAS needs. You can get as crazy as you want with them really, from a fanless setup with a USB JBOD enclosure to a mini ITX board in a NAS case with a bazillion SATA ports. The current iteration of QuickSync on them is very capable and can do 3-4 concurrent 4K transcodes on Plex, all on a box that won't break 30-40W going full tilt, and will idle in the single digits. I'll eventually replace the ML30 with one someday
|
# ? Apr 29, 2023 05:42 |
|
e.pilot posted:any of the current gen celerons should be more than ample for most not insane user count NAS needs I'm kind of stuck using TrueNAS because I had the Atom C3*** series board from ASRock Rack that had the flaw in it and now it won't boot, and I'd really love my data. I think I found an LGA1151 Supermicro board with an M.2 slot that I can use for the OS and 4 SATA ports that I might use with something like an i3 9300 or so. Just have to get the RAM situation figured out. So I'm going to use Amazon links because it is easier, and I might need some help with picking out RAM (I did check the QVL but couldn't come up with a ton...)
Motherboard: ASRock Rack C246 WSI Mini-ITX
Processor: Intel Core i3 9100 (seems like it supports ECC and HEVC stuff, but if there is a better option I am open to suggestions)
RAM: I would love 16-32GB
Boot Drive: WD Blue NVMe 500GB
Data Drives: Looking more at the Seagate IronWolfs rather than WD Reds. I don't think I want to shuck stuff, but looking for about 16 or so total TB. Need to get my old pool back up to see how much total data I have.
Case: Fractal Node 804
Power Supply: I like SFX because they are small, but also open to something fully modular to minimize cables and, most importantly, quiet.
Fans/CPU Cooler: Also something super quiet if possible.
E: I realize now that the M.2 is M.2 2242, so I might just get a plain-jane SATA SSD instead. I want to order the OCuLink to 4x SATA cable and use that rather than bending around 4 SATA cables. Any suggestions for RAM, PSU, and cooler and fan with the goal of maximum quiet? Does this seem reasonable? KKKLIP ART fucked around with this message at 15:09 on Apr 29, 2023 |
# ? Apr 29, 2023 13:05 |
|
Keito posted:Late for the power consumption talk, but I finally hooked up my new Eaton UPS today and got to see my server's wattage for the first time. Holy smokes. 120TB is a lot of space.
|
# ? Apr 29, 2023 13:35 |
|
Is there any easy way to tell what stick of ram is throwing constant "Correctable ECC - Asserted" on an asrock rack board? I went from 1 stick of ram to 6 on my board and ran memtest on it before booting into truenas but now i get ECC errors about once a day. I ran memtest on it again and nothing. The IPMI won't tell me what stick it is either.
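If the board is running Linux (e.g. TrueNAS SCALE), EDAC and rasdaemon can usually narrow a correctable error down to a DIMM; on CORE/FreeBSD you're mostly stuck with the MCA lines in dmesg. Tool availability varies by system, so treat these as starting points rather than guaranteed answers:

```shell
# Per-DIMM corrected-error counters from the EDAC driver
grep -H . /sys/devices/system/edac/mc/mc*/dimm*/dimm_ce_count

# rasdaemon (if installed) decodes logged errors to a DIMM label
ras-mc-ctl --errors

# The BMC's event log sometimes records the failing slot even
# when the IPMI web UI doesn't show it
ipmitool sel elist | grep -iE 'ecc|memory'
```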
|
# ? Apr 29, 2023 15:58 |
|
Has anyone ever rolled the dice on no-brand disks from "MaxDigitalData" or https://www.goharddrive.com/ ? I'm guessing they're used, but used >10TB disks with a 5 year warranty seem fine for under $10/TB. It'd be less work than shucking externals, too. https://www.amazon.com/MDD-MDD14TSATA25672E-256MB-Internal-Enterprise/dp/B0BYTVCWM9 Price is right, goods are suspect, I bet they're used Seagate.
|
# ? Apr 29, 2023 16:25 |
|
OK, I'm using TrueNAS for the first time, and I'm having trouble getting basic SMB sharing going. Here's what I've got:
A pool consisting of two vdevs made of 500GB and 1TB SATA drives
Under that pool, a dataset called "SSD Pool Dataset" (I don't really understand what datasets are yet)
The pool ACL, specifically giving datauser full access
A user called Datauser that I plan to use to log into the SMB share remotely
An SMB share, sharing the pool
The share ACL
I'm getting access denied when I try to write to the share with a remote computer. I don't know enough yet to know what I did wrong though
|
# ? Apr 29, 2023 18:27 |
|
Wait, I was trying to write to SSD Pool, should I be trying to write to SSD Pool Dataset? Because that does actually work
|
# ? Apr 29, 2023 18:39 |
|
Twerk from Home posted:Has anyone ever rolled the dice on no-brand disks from "MaxDigitalData" or https://www.goharddrive.com/ ? I bought some HGST 8TB drives from Goharddrive via Newegg a few years ago. Two of them were DOA but they replaced those (UPS had beaten the poo poo out of the box anyway). One died a couple years later and they issued a return/refund for it no problem. I've since been buying just straight up used 10TB SAS drives for in the range of $8/TB, and I just keep two hot spares in the system.
|
# ? Apr 29, 2023 18:44 |
|
Unraid has been changing some terminology in their latest RC and introduced "exclusive shares" which are there to enable ZFS pools that don't use shfs. Looks like it's going to implement things to make it more viable to handle ZFS. Now if their pool import semantics weren't that sketch (currently)...Beve Stuscemi posted:Wait, I was trying to write to SSD Pool, should I be trying to write to SSD Pool Dataset? Because that does actually work
|
# ? Apr 29, 2023 19:56 |
|
I like thinking about datasets as more akin to directories with a lot of attached settings - it makes it easier to explain how all datasets in a pool share the same free space. (Unless you use quotas to limit how large they can grow).
|
# ? Apr 29, 2023 20:20 |
|
Beve Stuscemi posted:Wait, I was trying to write to SSD Pool, should I be trying to write to SSD Pool Dataset? Because that does actually work Mine is set up so SMB shares datasets under the pool, so there's "pool", under that is "documents", and "documents" has SMB sharing on it. So basically yes, do that.
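From the shell, that layout is just the following (names are made up; in TrueNAS you'd create the dataset in the UI and point the SMB share at its mountpoint):

```shell
zfs create tank/documents   # one dataset per share, under the pool's root dataset
zfs list -r tank            # all datasets in the pool draw from the same free space
# Then share /mnt/tank/documents over SMB, not /mnt/tank itself
```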
|
# ? Apr 29, 2023 20:46 |
|
Ok I think I had it correct the whole time and just didn’t know where to put data
|
# ? Apr 29, 2023 21:18 |
|
Beve Stuscemi posted:drat that’s clean! I’m for sure not posting mine now lol Messy builds rule, this was my initial build and it worked great until I needed more than three drives, lol. And that's not "here it is with the case open so you can see the parts", that's how it sat on the shelf. The full ATX board in an mATX case meant I couldn't put the cover on. Scruff McGruff fucked around with this message at 06:00 on Apr 30, 2023 |
# ? Apr 30, 2023 05:57 |
|
Only way that could've been better is if you cut a hole in the side of the case to clear the mobo
|
# ? Apr 30, 2023 07:56 |
|
Is anyone using GlusterFS at home, whether as part of TrueNAS SCALE or just on some Linux distro? What has your experience been? The layering that TrueNAS SCALE wants to do feels bad to me, where you'll have ZFS on each host, and then GlusterFS doing redundancy across the hosts. Is there a way to use GlusterFS for everything and have each disk be managed directly by Gluster instead? I see that it needs an underlying filesystem; the docs use XFS in simpler examples.
|
# ? May 3, 2023 16:31 |
|
Twerk from Home posted:Is anyone using GlusterFS at home, whether as part of TrueNAS SCALE or just on some Linux distro? What has your experience been? AFAIK Gluster is just the thing that runs on top of a filesystem. So no, I don't think so. Same with Lustre and Ceph as well IIRC.
|
# ? May 4, 2023 07:50 |
|
Mr Shiny Pants posted:AFAIK Gluster is just the thing that runs on top of a filesystem. So no, I don't think so. Same with Lustre and Ceph as well IIRC. I janitor Ceph at work; it has its own native storage backend for raw block devices, called BlueStore, which has been the default for years: https://docs.ceph.com/en/latest/rados/configuration/storage-devices/ Ceph also does its own redundancy: you feed it whole non-redundant plain disks and it self-heals as disks die. No need to even go swap them. I was hoping that I could feed Gluster raw disks too, and not have to manage a lower layer. Lustre specifically was designed to work on top of expensive hardware RAID controllers, with two controllers for HA, which is not something I want to put up with.
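For comparison, standing up a BlueStore OSD on a raw disk is a one-liner (the device name here is a placeholder):

```shell
# Consumes /dev/sdb whole -- no filesystem or RAID layer underneath
ceph-volume lvm create --data /dev/sdb
ceph osd tree   # the new OSD joins the CRUSH map; Ceph handles redundancy itself
```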
|
# ? May 4, 2023 14:29 |
|
Our old Windows based HTPC/Fileserver is finally dying, so it's finally time to upgrade to something more modern. Other than file storage, the main 2 services needed are:
One thing I'd really like is to move to a setup where we a) have a disk for redundancy (whether that be via proprietary NAS solution, RAID5 or something like TrueNAS/Unraid), and b) have all the disks appear as a single storage drive. Right now we have nothing in this regard, just 4 HDDs totaling 19TB mapped to separate drive letters (lol). As far as transcoding goes, my roommate has a 75" 4k HDR TV and a high end AVR 5.2.4 Atmos setup, so we regularly download high quality 4k HDR rips for movies where it's warranted. The 4k transcoding + HDR tonemapping can be a lot for some CPUs, but my understanding is recent Intel chips (including Celerons) have "Quick Sync Video" which Plex is able to use to hardware transcode these types of files. Currently we have to store separate 1080p versions of any of our 4k HDR rips as our older AMD CPU can't transcode them fast enough (and Plex hardware transcoding only seems to support nVidia and Intel QSV), but it would be preferable if the new setup could handle transcoding those files in real time. For video playback (Plex & streaming services) we have a Shield Pro, so we're set there as the Shield is the only thing I've found that can actually properly passthrough all the movie audio formats (including all the DTS formats and Dolby TrueHD) with proper HDR switching other than properly configured MPC-HC. For music though while we sometimes use the Shield (via casting or apps), we still often use the PC via keyboard/mouse in some circumstances as it still tends to be easier/faster than the alternatives. Especially when we want music playing over a muted video or if we have people over and everyone wants to queue stuff up on different platforms (e.g. YouTube, Soundcloud, Spotify). Possibly the Android TV experience for this has been improved more recently but I have my doubts. We also tend to use the PC to manage qBittorrent. 
We do have the web GUI enabled but IIRC the web GUI is a bit clunky compared to the desktop app and if we're already at the couch it's just easier to use the computer. I'm more committed to going headless at the TV (e.g. just a headless NAS box and the Shield), but I think my roommate will be more resistant (and ultimately this will be his setup to keep whenever I move). Even ignoring the music use cases, I think there will still be a "want" to manage torrents from the couch rather than from a computer in another room, so even if we go with a headless NAS, we'll probably pick up a cheap Celeron based mini PC or laptop for the TV area. My initial thought is this:
One thing that rubs me somewhat wrong with this setup is that having both the NAS and the mini PC with processors powerful enough to Plex-transcode 4K HDR seems a bit wasteful and redundant. But as far as I can tell there aren't really any good 4 bay NAS options that are cheap enough relative to a DS423+ to justify whatever downgrade in features or quality they may have. Power consumption is something I'd like to minimize. Noise is a concern as well (and my understanding is the little consumer NAS boxes will be louder than a custom PC build), but worst case we can move a noisy NAS to a closet or something. Our home networking is also still on 1 Gbit so we don't need 2.5 or 10 Gbit ethernet support. Ultimately I'd like to keep setup and janitoring headaches to a minimum and don't mind paying extra to keep things easy at this point. Other options I'm considering:
Idea with both of these alternatives is to combine the PC and NAS into a single unit. My understanding (which I have little confidence in) is that with something like TrueNAS or Unraid to get the "PC" part we'd run TrueNAS or Unraid headless and then use a Windows or Linux VM outputting to the iGPU. The Windows option is just there as while it's less capable than a NAS OS, it's a simpler setup and probably "good enough" for us. I think these options would use more power and need a bit more janitoring than a Synology setup, but at least I would be able to build these almost silent. Anyone have any thoughts or recommendations on which approach to go with as well as hardware recommendations? Also, any recommendations for HDDs 12TB or larger? I'm looking at 16TB ($250) or 18TB ($280) Red Pros right now. Compared to those, EasyStores don't seem to be worth shucking currently with 14TB ($220) and 18TB ($285) being the best values. IronWolfs also seem to be a bit more than Red Pros currently as well. Splinter fucked around with this message at 04:58 on May 5, 2023 |
# ? May 5, 2023 00:55 |
|
current gen celeron mini pc from aliexpress (N5105 or N6005) and 4-5 bay JBOD and unraid. modern quicksync is very good, like 3-4 simultaneous 4K transcodes good
|
# ? May 5, 2023 15:35 |
|
Twerk from Home posted:I janitor Ceph at work, they have their own native storage that works with raw block devices called Bluestore that's been default for years: https://docs.ceph.com/en/latest/rados/configuration/storage-devices/ I wasn't even aware of Bluestore.... thanks.
|
# ? May 5, 2023 17:16 |
|
e.pilot posted:current gen celeron mini pc from aliexpress (N5105 or N6005) and 4-5 bay JBOD and unraid Not sure why I didn't consider this from the start. Was too fixated on either an external enclosure NAS or DIY PC with everything in a single unit. Would something like a TerraMaster D5-300 be a solid option for this? Could be used as JBOD or it can do RAID5 on its own as well. Does look like it actually consumes more power under access than the DS423+ though. Also 80mm fans vs 92mm but noise spec is still low so maybe this doesn't matter.
|
# ? May 5, 2023 20:22 |
|
Splinter posted:Not sure why I didn't consider this from the start. Was too fixated on either an external enclosure NAS or DIY PC with everything in a single unit. For about the same price as that 5-bay TerraMaster enclosure you could get two of these Mediasonic 4-bay ones that are USB 3.2 gen 2 (10Gbps)
|
# ? May 5, 2023 20:51 |
|
|
SamDabbers posted:For about the same price as that 5-bay TerraMaster enclosure you could get two of these Mediasonic 4-bay ones that are USB 3.2 gen 2 (10Gbps) yeah this, most of the USB JBOD stuff is more or less the same at the consumer level these days, USB->SATA is pretty solved commodity tech at this point
|
# ? May 5, 2023 21:27 |