|
Guess I'm a bit late telling people that the Toshiba 12TB drives are at an all-time low of $240 because there was a promo code going at Newegg, but yeah... I'm going with those for my NAS even though they're 7200 RPM drives and use more power and all that, because I'd rather not have my drives die every other year like my Easystores have across several chassis and setups by now, while my Samsung and other Toshiba drives have had zero issues.
|
# ? Nov 10, 2021 19:29 |
|
Baronash posted:I took a look at this, and I think it'll be my fallback option, but it seems to be missing some of the features I would like. I'm looking for something that works a bit more like the Google Drive desktop application, but with my own file server. The features I'm really after are: -The Windows Nextcloud client supports virtual files to download on demand. I haven't checked to see how it handles sync conflicts though. -OnlyOffice can integrate with Nextcloud for live collaborative editing or offline editing. One big downside is that write permissions are all or nothing, so you can't let somebody comment or suggest changes without giving them access to change everything. I think this might be a Nextcloud limitation instead of OnlyOffice. e: Nextcloud has a lot of apps of various levels of usability that plug into it. To me, the most useful is the cookbook (thanks firetora). It can scrape many recipe blogs and save them nicely on one page. For example, I can give it this link: https://thewoksoflife.com/garlic-noodles/ and get this: CopperHound fucked around with this message at 20:42 on Nov 10, 2021 |
# ? Nov 10, 2021 20:34 |
|
anyone know if TrueNAS supports Plex GPU transcoding? is there a 'best value' option for a cheap card that would do H265? i assume one of the weedier quadros would be a good pick right? (obviously cheap is relative with GPU prices being as hosed as they are; i guess 'cheap' in this context means 'cheaper than replacing my entire gen8 microserver for something with a better CPU')
|
# ? Nov 11, 2021 13:02 |
I'm not sure what it has to do with TrueNAS (though I can't exclude that they do something which might make it harder, which wouldn't be the first time), but this plex support article mentions a minimum Plex version that needs to be met for it to work, though it might just be iGPU because of the following. Looking at mpv(1), nvdec seems to require CUDA, which nvidia has deliberately left out of the FreeBSD driver for reasons that they've never bothered explaining, and vdpau seems to be Linux-only (despite vdpau existing in FreeBSD ports). If you don't need the video to be output from the server, one thing you might be able to do is replace the CPU with an Ivy Bridge-era part whose iGPU has QuickSync, doesn't go above 35W TDP (which is what the passive cooling is designed for), and supports ECC (since I assume you have that, but you can drop that requirement if you don't). If you do need the video to be output from the server, the only option is to somehow find an AMD GPU that can fit. In either case, you're looking at trying to make the drm-kmod package run on TrueNAS somehow, and then unhide the relevant devices in devfs through the TrueNAS GUI, since the Plex jail needs to be able to use them to get access to the relevant hardware registers. BlankSystemDaemon fucked around with this message at 15:04 on Nov 11, 2021 |
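For reference, the devfs side of that usually means a custom ruleset in /etc/devfs.rules that unhides the DRM nodes for the jail. A minimal sketch - the ruleset name and number here are made up, and whether TrueNAS lets you point a jail at a custom ruleset is exactly the open question above:

```shell
# /etc/devfs.rules -- hypothetical ruleset for a jail that needs GPU access
[devfsrules_jail_gpu=100]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
# unhide the DRM device nodes that drm-kmod creates
add path dri unhide
add path 'dri/*' unhide
add path 'drm/*' unhide
```

The jail would then reference it with devfs_ruleset=100 in its configuration.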
|
# ? Nov 11, 2021 14:46 |
|
These might be useful for you as well: https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding This shows you how many streams each particular model is capable of. https://en.wikipedia.org/wiki/Nvidia_NVENC The chart in here shows you which generations are capable of encoding what. E: forgot the special sauce to bypass artificial restrictions on nvidia drivers: https://github.com/keylase/nvidia-patch Gonna Send It fucked around with this message at 04:17 on Nov 12, 2021 |
# ? Nov 11, 2021 15:08 |
|
So annoying my old gpu is a gtx970. It does encode quickly but quality is ehhh without the nvenc stuff. Probably will pull it out of my unraid box again because the fans are the noisiest thing in the case.
|
# ? Nov 11, 2021 20:08 |
|
My NAS has been out of use because, up until recently, the various subscription services and other content availability had been relatively constant and reliable, and I didn't see a need for one. With the infighting between the different VODs and content disappearing, and my playlists on Youtube and elsewhere developing more holes than a fermenting swiss cheese, I guess it's gonna find some use again for archiving purposes. What's the situation with mechanical drives presently? Are there high capacity drives that don't do this shingled stuff? Are dual actuator drives common/affordable nowadays? The ideal setup right now would be two 8TB dual actuator drives in a ZFS mirror (with an L2ARC), to have some performance headroom to shuffle games onto an iSCSI drive. --edit: Also, is there meanwhile some successor to iSCSI? I keep hearing the term NVMe-oF, but I suppose there won't be any software based solutions? --edit: There's software targets for Linux and initiators for Windows, and they allow me to use RDMA on my Mellanox cards. That's nice. Combat Pretzel fucked around with this message at 03:22 on Nov 13, 2021 |
# ? Nov 13, 2021 03:15 |
Combat Pretzel posted:My NAS has been out of use because, up until recently, the various subscription services and other content availability had been relatively constant and reliable, and I didn't see a need for one. With the infighting between the different VODs and content disappearing, and my playlists on Youtube and elsewhere developing more holes than a fermenting swiss cheese, I guess it's gonna find some use again for archiving purposes. Dual-actuator drives are primarily a way to increase IOPS for high density bulk storage where non-volatile flash simply isn't an option - I'm not even sure they can be bought off the shelf yet, so the prices are insane. The CAM target layer in FreeBSD has [url=https://freshbsd.org/freebsd/src?q=cam%2Fctl&author[]=Alexander+Motin+%28mav%29]seen a fair few improvements from Alexander Motin[/url] which should make it possible to serve traffic at high speed. As far as I understand it, there's nothing preventing NVMe-oF from being done entirely in software. I just don't know if anyone's working on it at present.
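Purely to illustrate the "entirely in software" point: on Linux, at least, the kernel ships a soft target (nvmet) that is driven entirely through configfs - roughly like this, where the NQN, backing zvol, and port values are all made up for the example:

```shell
# Load the soft target plus the TCP transport (nvmet-rdma exists for RDMA-capable NICs)
modprobe nvmet nvmet-tcp
cd /sys/kernel/config/nvmet

# Create a subsystem and back a namespace with a block device
mkdir subsystems/nqn.2021-11.example:store
echo 1 > subsystems/nqn.2021-11.example:store/attr_allow_any_host
mkdir subsystems/nqn.2021-11.example:store/namespaces/1
echo /dev/zvol/tank/games > subsystems/nqn.2021-11.example:store/namespaces/1/device_path
echo 1 > subsystems/nqn.2021-11.example:store/namespaces/1/enable

# Expose it on a TCP port and link the subsystem to the port
mkdir ports/1
echo tcp     > ports/1/addr_trtype
echo ipv4    > ports/1/addr_adrfam
echo 0.0.0.0 > ports/1/addr_traddr
echo 4420    > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2021-11.example:store ports/1/subsystems/store
```

Swapping nvmet-tcp for nvmet-rdma is what gets you the RDMA path on Mellanox cards.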
|
|
# ? Nov 13, 2021 03:30 |
|
BlankSystemDaemon posted:As far as I understand it, there's nothing preventing NVMeoF from being done entirely in software. I just don't know if anyone's working on it at a present. Too bad about dual actuator drives. That'd have been a nice to have.
|
# ? Nov 13, 2021 03:52 |
|
Do we have a thread about self-hosted cloud alternative stuff? I know there is the Plex thread, but I'm wondering if we have a place to discuss stuff like Nextcloud, Syncthing, all the poor substitutes for Google Photos, and collaboration tools like OnlyOffice.
|
# ? Nov 14, 2021 17:06 |
CopperHound posted:Do we have a thread about self-hosted cloud alternative stuff? It might be nice to have a thread for self-hosted stuff, so why not create one?
|
|
# ? Nov 14, 2021 17:08 |
|
BlankSystemDaemon posted:It might be nice to have a thread for self-hosted stuff, so why not create one?
|
# ? Nov 14, 2021 17:33 |
|
Doesn't the home lab thread kinda cover that? Though I guess it's more hardware focused. I'd bookmark it if you made it, though.
|
# ? Nov 14, 2021 19:51 |
|
CopperHound posted:Do we have a thread about self-hosted cloud alternative stuff? Yes please. I recently bought a server and want to try out all this stuff but I'm intimidated and a little dumb.
|
# ? Nov 15, 2021 07:53 |
|
A Bag of Milk posted:Yes please. I recently bought a server and want to try out all this stuff but I'm intimidated and a little dumb. https://forums.somethingawful.com/showthread.php?threadid=3985071 No straight up how-to yet, but maybe will give you some ideas about what to do.
|
# ? Nov 15, 2021 08:00 |
|
UuuUuuUUuuuu So what happens when you have a month of daily snapshots with a replication task to mirror that whole volume/dataset/snapshots to another FreeNAS box, and then you miss about 4 months? I finally managed to get the replication task going again. But I'm sorta left wondering how it's gonna handle the replication. It's not going to replicate the whole 20tb volume again is it? Hahahahah gently caress it totally is. drat.
|
# ? Nov 15, 2021 21:46 |
|
If there was still a snapshot common to source and destination, I'd have thought it would send only blocks changed since that snapshot.
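To sketch what that looks like at the command line (pool and snapshot names invented) - the newest snapshot that still exists on both sides is the increment base:

```shell
# List snapshots on both sides and eyeball the newest common one
zfs list -t snapshot -o name -s creation tank/data
ssh backup zfs list -t snapshot -o name -s creation vault/data

# Send only the blocks changed between the common base and the newest snapshot
zfs send -i tank/data@auto-2021-07-01 tank/data@auto-2021-11-15 | \
    ssh backup zfs receive vault/data
```

If no common snapshot survives on the destination, there is nothing to diff against and a full send is the only option - which sounds like what happened here.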
|
# ? Nov 15, 2021 23:04 |
|
Zorak of Michigan posted:If there was still a snapshot common to source and destination, I'd have thought it would send only blocks changed since that snapshot. I was kinda hoping the same, but judging by the rate that percent value is goin' up, I think we're doin' the full pull here. I did some napkin math and it looks like about 40 hours over jigabit. This could have been avoided if FreeNAS had just told me that the replication failed. It had always done that in the past, so idk what happened. I guess it's on me for not noticing that I wasn't getting emails about the backup server's scrubs. When I finally noticed, the server was just hung on a black screen and didn't even show on my network manager. So I guess FreeNAS could maybe reach it, but not push anything to it. idk, I don't get it. It's in the past now. Still very confused. Hopefully the replication completes fine. Then I'll update, switch to SATA SSDs from USB, and move to TrueNAS.
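The napkin math is about right, assuming the full 20TB really does get resent - a quick sketch of the arithmetic:

```shell
# hours = bytes / bytes-per-second / 3600
xfer_hours() { awk -v tb="$1" -v mbps="$2" 'BEGIN { printf "%.0f\n", tb * 1e12 / (mbps * 1e6) / 3600 }'; }

xfer_hours 20 125   # gigabit wire speed (125 MB/s)     -> 44
xfer_hours 20 110   # realistic, with protocol overhead -> 51
```

So ~40 hours was, if anything, optimistic.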
|
# ? Nov 16, 2021 00:19 |
|
Oh son of a gently caress. I thought by pointing it at the same dataset that the last replication used to be on that it would just settle everything by deleting old snapshots/starting new/etc. But it's just filling up the volume even though the old data is still "there". Oh FreeNAS what you doin. Why you do dis. Why don't your numbers add up. Ziploc fucked around with this message at 01:14 on Nov 16, 2021 |
# ? Nov 16, 2021 01:11 |
|
CopperHound posted:I made a post: Yes that gives me quite a bit to experiment with, thanks for the thread
|
# ? Nov 16, 2021 01:37 |
Ziploc posted:UuuUuuUUuuuu One of the central pieces of ZFS is that it supports bit-level incremental send and receive, as documented in the zfs-send(8) and zfs-receive(8) manual pages - and the best part is, it's got absolutely nothing to do with the snapshots, so you can set it up at any time irrespective of when you took the snapshots. Whether FreeNAS/TrueNAS exposes this, I don't know - so you'll have to find that out. Ziploc posted:Oh son of a gently caress. I thought by pointing it at the same dataset that the last replication used to be on that it would just settle everything by deleting old snapshots/starting new/etc. But it's just filling up the volume even though the old data is still "there". If not, remember that ZFS is pooled storage with datasets sharing that pool equally.
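When the numbers don't seem to add up, ZFS can break down exactly where the space went (pool name invented):

```shell
# Split USED into usedbysnapshots / usedbydataset / usedbyrefreservation / usedbychildren
zfs list -o space -r tank
```

The usedbysnapshots column is usually the culprit when a re-replication "fills up" a volume whose old data is still there.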
|
|
# ? Nov 16, 2021 10:39 |
|
Well it filled the target up to 100% and then failed the replication. So instead of wasting time with no backup, I just deleted the target dataset and let it start from 0. Though the only way I saw to cancel a replication task was to restart the target and then disable the replication task. Hopefully it's smart enough to start the hell over after. Starting to get just a touch nervous.
|
# ? Nov 16, 2021 15:33 |
I don't know enough about FreeNAS to explain why it's doing that; I use FreeBSD and it works great there.
|
|
# ? Nov 16, 2021 17:13 |
|
BlankSystemDaemon posted:I use FreeBSD YOU DON'T SAY
|
# ? Nov 16, 2021 17:17 |
How do you know if someone at your party uses FreeBSD?
|
|
# ? Nov 16, 2021 17:20 |
That Works posted:How do you know if someone at your party uses FreeBSD?
|
|
# ? Nov 16, 2021 17:36 |
|
I appreciate your comments, it's interesting to hear about *BSD, and the dedicated thread died on its arse and disappeared.
|
# ? Nov 17, 2021 20:10 |
I was just being self-deprecating, don't worry about it.
|
|
# ? Nov 17, 2021 20:30 |
|
ive never used bsd but i also enjoy the bsd-posting
|
# ? Nov 18, 2021 02:28 |
VostokProgram posted:ive never used bsd but i also enjoy the bsd-posting
|
|
# ? Nov 18, 2021 18:12 |
|
BlankSystemDaemon posted:One of the central pieces of ZFS is that it supports bit-level incremental send and receive, as documented in the zfs-send(8) and zfs-receive(8) manual pages - and the best part is, it's got absolutely nothing to do with the snapshots, so you can set it up at any time irrespective of when you took the snapshots. I forgot that I meant to ask about this. I see that they added bookmarks as an alternative to snapshots on the source side. Is that what you mean by "absolutely nothing to do with the snapshots"? Or is there some way I'm not grasping from the docs to generate an incremental stream without getting tangled up in any of that? My understanding is that you needed snapshots, or now bookmarks, to delineate the start and finish of the replication stream, and that the starting point snapshot had to exist on the destination side in order to receive an incremental snapshot. Otherwise, without that starting point where the source and destination are known to be the same at time x, sending the difference between time x and time y doesn't give you something useful. Ziploc (or other posters, I guess), if you ever get tangled up like that again and have a chance to mess with it, I'd love to get down to the command prompt level and try to look at the ZFS underpinnings of the replication process and see what's gone wrong. I used to be a Solaris geek at work and did a lot of tinkering with zfs send and receive, but I have never had the chance to use FreeNAS' replication function.
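For what it's worth, that reading matches the manual pages: a bookmark only replaces the *source-side* snapshot as the increment base - the destination still has to hold the snapshot it previously received. A sketch, with dataset and snapshot names invented:

```shell
# Initial full send
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backup zfs receive vault/data

# Keep a bookmark instead of the snapshot; bookmarks reference no blocks,
# so the source can free @base's space
zfs bookmark tank/data@base tank/data#base
zfs destroy tank/data@base

# Later: incremental from the bookmark to a new snapshot
zfs snapshot tank/data@next
zfs send -i tank/data#base tank/data@next | ssh backup zfs receive vault/data
```

Without that common starting point on the receiving side, an incremental stream has nothing to apply against, exactly as described above.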
|
# ? Nov 18, 2021 20:44 |
|
vaguely pondering the idea of colo'ing a 1U server for personal/family/friends use (personal code hosting and DB, nextcloud, plex, etc). what would be good ZFS pool configurations for 12 drives? maybe a high-reliability pool and a space-efficient pool for bulk storage, and then a couple hot spares? or would 1 hot spare be enough with some mirrors, and I just hustle it over if there's a problem? maybe do 2 spares, 2+2 mirrored vdevs on the reliable pool, and then the remaining 6 drives go in a raidz2 or 3+3 raidz1 for bulk storage? Or just have a single mirrored vdev and then go 8-drive raidz2 or 4+4 raidz1 for the bulk pool? or maybe just have one big pool, and do 2+2+2+2+2 or 10-drive raidz2 or a pair of 5-drive raidz1s? I suppose anything really important I could back up to home, or to backblaze anyway. I could probably have a M.2 for boot and an optane M.2 for SLOG in addition, and could add a fair number of M.2s in ultra-low-profile PCIe adapter cards, so no need for super high performance on the HDD pools.
|
# ? Nov 18, 2021 21:17 |
|
My zpool is colocated and is a mess nobody should ever replicate but I wouldn't do more than a single spare, and if you're limited to 12 drive bays then I'd keep that spare out of the server and do four three-drive raidz1s. Just hustle and/or power down the box until you can get to it if you have a drive poo poo the bed. The only times I've had mass drive failures were either a family of drives all failing due to the same problem (and even then, with adjacent serial numbers, they failed over a period of months) or a bus compatibility problem (I have some WD30EFRX drives that apparently spray poo poo all over my SAS controller, because if the WD30EFRX drives are busy then ZFS randomly fails any drive in the pool).
|
# ? Nov 18, 2021 21:59 |
|
Paul MaudDib posted:vaguely pondering the idea of colo'ing a 1U server for personal/family/friends use (personal code hosting and DB, nextcloud, plex, etc). I'm dubious about the value of different levels of reliability. If you lose data, you're going to be eating a scolding from friends and family, and who wants that? Unless you're really in a tough spot for affording the amount of space you need, I would say mirrored M.2 for your boot media, 11-disk raidz2 for data, one hot spare. If the data is really important, 11-disk raidz3. I actually feel safer with a larger raidz2 pool than a smaller raidz1 pool. Be sure to configure automatic periodic scrubs and keep an eye on the status of the pool.
|
# ? Nov 18, 2021 22:12 |
|
Well poo poo Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspaces which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap?
|
# ? Nov 18, 2021 22:17 |
|
Mr. Crow posted:Well poo poo Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspaces which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap? backblaze b2 seems to be the cheapest actual metered service (well, $0.005 per gb-mon vs $0.004 for Glacier, but if you ever need to actually retrieve glacier you're going to eat poo poo)
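For the 12TB mentioned above, the metered math works out like this (rates as quoted; egress and retrieval fees not included):

```shell
# monthly storage cost in USD = TB * 1000 GB/TB * rate per GB-month
cost_per_month() { awk -v tb="$1" -v rate="$2" 'BEGIN { printf "%.2f\n", tb * 1000 * rate }'; }

cost_per_month 12 0.005   # Backblaze B2 -> 60.00/mo
cost_per_month 12 0.004   # S3 Glacier   -> 48.00/mo, before the retrieval pain
```

The gap is small enough that B2's cheap retrieval usually wins for backups you might actually need back.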
|
# ? Nov 18, 2021 22:19 |
Zorak of Michigan posted:I forgot that I meant to ask about this. I see that they added bookmarks as an alternative to snapshots on the source side. Is that what you mean by "absolutely nothing to do with the snapshots"? Or is there some way I'm not grasping from the docs to generate an incremental stream without getting tangled up in any of that? My understanding is that you needed snapshots, or now bookmarks, to delineate the start and finish of the replication stream, and that the starting point snapshot had to exist on the destination side in order to receive an incremental snapshot. Otherwise, without that starting point where the source and destination are known to be the same at time x, sending the difference between time x and time y doesn't give you something useful. I see that it's also another instance of implicit documentation, instead of being explicit - so I guess I have to fix that too. Paul MaudDib posted:vaguely pondering the idea of colo'ing a 1U server for personal/family/friends use (personal code hosting and DB, nextcloud, plex, etc). For my local offline backup, I'm doing 15-disk vdevs with raidz3 - but if you're doing local online backup, you might wanna go with a different configuration. Zorak of Michigan posted:I'm dubious about the value of different levels of reliability. If you lose data, you're going to be eating a scolding from friends and family, and who wants that? Unless you're really in a tough spot for affording the amount of space you need, I would say mirrored M.2 for your boot media, 11-disk raidz2 for data, one hot spare. If the data is really important, 11-disk raidz3. I actually feel safer with a larger raidz2 pool than a smaller raidz1 pool. Be sure to configure automatic periodic scrubs and keep an eye on the status of the pool. The biggest difference here is that the resilver speed of ZFS is a LOT quicker than traditional hardware RAID (with the latter being ~10MBps and ZFS easily doing 100MBps in my experience). Mr. Crow posted:Well poo poo Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspaces which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap? I mean, it's Google we're talking about - of course they were going to pull it. About the only thing they're not ever going to kill is their ad business, because that's basically the only thing making them money.
|
|
# ? Nov 18, 2021 22:41 |
|
Paul MaudDib posted:backblaze b2 seems to be the cheapest actual metered service (well, $0.005 per gb-mon vs $0.004 for Glacier, but if you ever need to actually retrieve glacier you're going to eat poo poo) When I relocated my office and changed all the IT infrastructure I went with Backblaze. Backups for $1.83 per month instead of $180 per month on Azure.
|
# ? Nov 19, 2021 05:37 |
|
BlankSystemDaemon posted:
Somewhat true, depending on how you lose the disks. Total failure, yes. Partial failure like one drive dead and an unrecoverable sector on another in the same vdev, ZFS can probably pull most of your data out of the fire. Main reason I like having lots of raidz1 vdevs is the ability to grow the array without having to replace every drive at once. The downside of this is that you inevitably end up with unbalanced vdevs, but so far I've been able to manage this without block pointer rewrite by upgrading the most full vdev and then playing musical disks the whole way down so that each vdev has a similar-ish amount of free space. If you aren't concerned about expanding by just upgrading three or four drives at a time, then yeah, just go with a single raidz2 or raidz3, depending on how much you hate the idea of reacquiring Linux ISOs.
|
# ? Nov 19, 2021 06:40 |
|
Mr. Crow posted:Well poo poo Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspaces which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap? Google Cloud? I think it's 0.004/gb/mo now. Less if you want to do less redundancy.
|
# ? Nov 19, 2021 07:01 |