necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Guess I'm a bit late telling people that the Toshiba 12TB drives hit an all-time low of $240 with a Newegg promo code, but yeah... I'm going with those for my NAS even though they're 7200 RPM drives and use more power and all that, because I'd rather not have my drives die every other year like my Easystores have across several chassis and setups by now, while my Samsung and other Toshiba drives have had zero issues.

CopperHound
Feb 14, 2012

Baronash posted:

I took a look at this, and I think it'll be my fallback option, but it seems to be missing some of the features I would like. I'm looking for something that works a bit more like the Google Drive desktop application, but with my own file server. The features I'm really after are:
1. Showing up while offline, so that I can browse a cached version of the folder structure and save files to it that'll sync when I reconnect.
2. Keeps files off the client machine but makes it easy to select specific files or folders that should be available offline.

Syncthing seems to do the first, but only by creating a complete local copy of everything in the shared folder.
You can get most of the way there with nextcloud and onlyoffice:
-The Windows Nextcloud client supports virtual files to download on demand. I haven't checked to see how it handles sync conflicts, though.
-OnlyOffice can integrate with Nextcloud for live collaborative editing or offline editing. One big downside is that write permissions are all or nothing, so you can't let somebody comment or suggest changes without giving them access to change everything. I think this might be a Nextcloud limitation rather than an OnlyOffice one.


e: Nextcloud has a lot of apps of various levels of usability that plug into it. To me, the most useful is the cookbook (thanks firetora). It can scrape many recipe blogs and save them nicely on one page.
For example, I can give it this link:
https://thewoksoflife.com/garlic-noodles/
and get this:

CopperHound fucked around with this message at 20:42 on Nov 10, 2021

Generic Monk
Oct 31, 2011

anyone know if TrueNAS supports Plex GPU transcoding? is there a 'best value' option for a cheap card that would do H.265? i assume one of the weedier quadros would be a good pick right?

(obviously cheap is relative with GPU prices being as hosed as they are; i guess 'cheap' in this context means 'cheaper than replacing my entire gen8 microserver for something with a better CPU')

BlankSystemDaemon
Mar 13, 2009




I'm not sure what this has to do with TrueNAS (though I can't rule out that they do something which makes it harder; it wouldn't be the first time), but this Plex support article mentions a minimum Plex version that needs to be met for it to work, though that might only apply to iGPUs, for the reason below.
Looking at mpv(1), nvdec seems to require CUDA, which Nvidia has deliberately left out of the FreeBSD driver for reasons they've never bothered explaining, and vdpau seems to be Linux-only (despite vdpau existing in FreeBSD ports).

If you don't need video output from the server, one thing you might be able to do is swap in an Ivy Bridge-era CPU with an iGPU that has QuickSync, stays at or below 35W TDP (which is what the passive cooling is designed for), and supports ECC (since I assume you're using that; drop that requirement if not).
If you do need the video to be output from the server, the only option is to somehow find an AMD GPU that can fit.

In either case, you're looking at trying to make the drm-kmod package run on TrueNAS somehow, and then unhiding the relevant devices in devfs through the TrueNAS GUI, since the Plex jail needs to be able to use them to get access to the hardware registers.
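On stock FreeBSD that dance looks roughly like the sketch below; it's untested on TrueNAS itself, the ruleset number is arbitrary, and TrueNAS may well fight you on the pkg step:

code:
# install the DRM kernel modules and load the right one at boot
pkg install drm-kmod
sysrc kld_list+="amdgpu"          # or i915kms for an Intel iGPU

# /etc/devfs.rules - unhide the GPU nodes inside the jail
[plexgpu=10]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path 'dri*' unhide
add path 'dri/*' unhide

# then point the Plex jail at the ruleset and restart it
iocage set devfs_ruleset=10 plex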

BlankSystemDaemon fucked around with this message at 15:04 on Nov 11, 2021

Gonna Send It
Jul 8, 2010
These might be useful for you as well:

https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding
This shows you how many streams each particular model is capable of.

https://en.wikipedia.org/wiki/Nvidia_NVENC
The chart in here shows you which generations are capable of encoding what.

E: forgot the special sauce to bypass artificial restrictions on nvidia drivers: https://github.com/keylase/nvidia-patch
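For reference, applying that patch is roughly the following on Linux (it patches libnvidia-encode in place, so it has to be re-run after driver updates, and it won't help on FreeBSD/TrueNAS Core where there's no NVENC support to begin with):

code:
# as root, after installing a driver version the patch supports
git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
bash ./patch.sh     # lifts the concurrent NVENC session cap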

Gonna Send It fucked around with this message at 04:17 on Nov 12, 2021

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
So annoying, my old GPU is a GTX 970. It does encode quickly but quality is ehhh without the newer NVENC stuff. Probably will pull it out of my Unraid box again because the fans are the noisiest thing in the case.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
My NAS has been out of use because, up until recently, the various subscription services and other content availability had been relatively constant and reliable, and I didn't see a need for one. With the infighting between the different VODs, content disappearing, and my playlists on YouTube and elsewhere developing more holes than a fermenting Swiss cheese, I guess it's gonna find some use again for archiving purposes.

What's the situation with mechanical drives presently? Are there high-capacity drives that don't do this shingled stuff? Are dual-actuator drives common/affordable nowadays? The ideal setup right now would be two 8TB dual-actuator drives in a ZFS mirror (with an L2ARC), to have some performance headroom to shuffle games onto an iSCSI drive.
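A minimal sketch of that layout, with made-up device names (da0/da1 for the drive pair, nvd0 for the cache SSD):

code:
zpool create tank mirror da0 da1     # two-way mirror
zpool add tank cache nvd0            # L2ARC on a spare SSD
zfs create -s -V 2T tank/games       # sparse zvol to export over iSCSI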

--edit:
Also, is there some successor to iSCSI yet? I keep hearing the term NVMe-oF, but I suppose there won't be any software-based solutions?
--edit:
There are software targets for Linux and initiators for Windows, and they allow me to use RDMA on my Mellanox cards. That's nice.
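On the Linux side the in-kernel target is driven through configfs (nvmetcli wraps the same plumbing); a hand-rolled NVMe/TCP export looks roughly like this, with the NQN, device path, and address all placeholders - swap nvmet-tcp/tcp for nvmet-rdma/rdma to use the Mellanox cards:

code:
modprobe nvmet
modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2021-11.lan:nas
echo 1 > subsystems/nqn.2021-11.lan:nas/attr_allow_any_host
mkdir subsystems/nqn.2021-11.lan:nas/namespaces/1
echo /dev/zvol/tank/games > subsystems/nqn.2021-11.lan:nas/namespaces/1/device_path
echo 1 > subsystems/nqn.2021-11.lan:nas/namespaces/1/enable
mkdir ports/1
echo tcp > ports/1/addr_trtype
echo ipv4 > ports/1/addr_adrfam
echo 192.168.1.10 > ports/1/addr_traddr
echo 4420 > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2021-11.lan:nas /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2021-11.lan:nas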

Combat Pretzel fucked around with this message at 03:22 on Nov 13, 2021

BlankSystemDaemon
Mar 13, 2009




Combat Pretzel posted:

My NAS has been out of use because, up until recently, the various subscription services and other content availability had been relatively constant and reliable, and I didn't see a need for one. With the infighting between the different VODs, content disappearing, and my playlists on YouTube and elsewhere developing more holes than a fermenting Swiss cheese, I guess it's gonna find some use again for archiving purposes.

What's the situation with mechanical drives presently? Are there high-capacity drives that don't do this shingled stuff? Are dual-actuator drives common/affordable nowadays? The ideal setup right now would be two 8TB dual-actuator drives in a ZFS mirror (with an L2ARC), to have some performance headroom to shuffle games onto an iSCSI drive.

--edit:
Also, is there some successor to iSCSI yet? I keep hearing the term NVMe-oF, but I suppose there won't be any software-based solutions?
The best way to get cheap drives is to shuck 8TB WD external disks, and that's also currently the only way to avoid SMR drives submarined into existing product lines.

Dual-actuator drives are primarily a way to increase IOPS for high-density bulk storage where non-volatile flash simply isn't an option - I'm not even sure they can be bought off the shelf yet, so the prices are insane.

The CAM target layer in FreeBSD has seen a fair few improvements from Alexander Motin (https://freshbsd.org/freebsd/src?q=cam%2Fctl&author[]=Alexander+Motin+%28mav%29), which should make it possible to serve traffic at high speed.
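CTL is what ctld(8) drives, so for reference, a minimal iSCSI target on FreeBSD is just a few lines of /etc/ctl.conf (IQN and zvol path made up for illustration):

code:
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
}

target iqn.2021-11.lan.nas:target0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/games
        }
}

followed by sysrc ctld_enable=YES && service ctld start.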

As far as I understand it, there's nothing preventing NVMe-oF from being done entirely in software. I just don't know if anyone's working on it at present.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BlankSystemDaemon posted:

As far as I understand it, there's nothing preventing NVMe-oF from being done entirely in software. I just don't know if anyone's working on it at present.
Linux seems to have built-in support in the kernel, and StarWind has a free initiator for Windows, both also supporting RDMA. I feel the latter is kind of important: back when I used iSCSI without RDMA (for lack of options), performance was meh, especially at random IO. I suspected all the traversal through the TCP/IP stack was one reason (based on benchmarks people reported from enterprise systems actually using RDMA).

Too bad about dual-actuator drives. That'd have been nice to have.

CopperHound
Feb 14, 2012

Do we have a thread about self-hosted cloud-alternative stuff?

I know there is the Plex thread, but I'm wondering if we have a place to discuss stuff like Nextcloud, Syncthing, all the poor substitutes for Google Photos, and collaboration tools like OnlyOffice.

BlankSystemDaemon
Mar 13, 2009




CopperHound posted:

Do we have a thread about self-hosted cloud-alternative stuff?

I know there is the Plex thread, but I'm wondering if we have a place to discuss stuff like Nextcloud, Syncthing, all the poor substitutes for Google Photos, and collaboration tools like OnlyOffice.
It might be nice to have a thread for self-hosted stuff, so why not create one?

CopperHound
Feb 14, 2012

BlankSystemDaemon posted:

It might be nice to have a thread for self-hosted stuff, so why not create one?
I'll see what I can put together this evening.

Mr. Crow
May 22, 2008

Snap City mayor for life
Doesn't the home lab thread kinda cover that? Though I guess it's more hardware-focused.

I'd bookmark it if you made it though

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.

CopperHound posted:

Do we have a thread about self-hosted cloud-alternative stuff?

I know there is the Plex thread, but I'm wondering if we have a place to discuss stuff like Nextcloud, Syncthing, all the poor substitutes for Google Photos, and collaboration tools like OnlyOffice.

Yes please. I recently bought a server and want to try out all this stuff but I'm intimidated and a little dumb.

CopperHound
Feb 14, 2012

A Bag of Milk posted:

Yes please. I recently bought a server and want to try out all this stuff but I'm intimidated and a little dumb.
I made a post:
https://forums.somethingawful.com/showthread.php?threadid=3985071

No straight-up how-to yet, but maybe it will give you some ideas about what to do.

Ziploc
Sep 19, 2006
MX-5
UuuUuuUUuuuu

So what happens when you have a month of daily snapshots with a replication task mirroring that whole volume/dataset/snapshots to another FreeNAS box, and then you miss about 4 months? I finally managed to get the replication task going again, but I'm sorta left wondering how it's gonna handle the replication. It's not going to replicate the whole 20TB volume again, is it?

Hahahahah gently caress it totally is. drat.

Zorak of Michigan
Jun 10, 2006


If there was still a snapshot common to source and destination, I'd have thought it would send only blocks changed since that snapshot.

Ziploc
Sep 19, 2006
MX-5

Zorak of Michigan posted:

If there was still a snapshot common to source and destination, I'd have thought it would send only blocks changed since that snapshot.

I was kinda hoping the same, but judging by the rate that percent value is goin' up, I think we're doin' the full pull here. I did some napkin math and it looks like about 40 hours over jigabit.

This could have been avoided if FreeNAS had just told me that the replication failed. It had always done that in the past, so idk what happened. I guess it's on me for not noticing that I wasn't getting emails about the backup server's scrubs. When I finally noticed, the server was just hung on a black screen and didn't even show up on my network manager. So I guess FreeNAS could maybe reach it, but not push anything to it. idk, I don't get it. It's in the past now. Still very confused.

Hopefully the replication completes fine. Then I'll update, switch from USB to SATA SSDs, and move to TrueNAS.

Ziploc
Sep 19, 2006
MX-5
Oh son of a gently caress. I thought that by pointing it at the same dataset the last replication used to be on, it would just settle everything by deleting old snapshots/starting new/etc. But it's just filling up the volume even though the old data is still "there".

Oh FreeNAS what you doin. Why you do dis. Why don't your numbers add up.

Ziploc fucked around with this message at 01:14 on Nov 16, 2021

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.

CopperHound posted:

I made a post:
https://forums.somethingawful.com/showthread.php?threadid=3985071

No straight-up how-to yet, but maybe it will give you some ideas about what to do.

Yes that gives me quite a bit to experiment with, thanks for the thread

BlankSystemDaemon
Mar 13, 2009




Ziploc posted:

UuuUuuUUuuuu

So what happens when you have a month of daily snapshots with a replication task mirroring that whole volume/dataset/snapshots to another FreeNAS box, and then you miss about 4 months? I finally managed to get the replication task going again, but I'm sorta left wondering how it's gonna handle the replication. It's not going to replicate the whole 20TB volume again, is it?

Hahahahah gently caress it totally is. drat.
One of the central pieces of ZFS is that it supports bit-level incremental send and receive, as documented in the zfs-send(8) and zfs-receive(8) manual pages - and the best part is, it's got absolutely nothing to do with the snapshots, so you can set it up at any time irrespective of when you took the snapshots.
Whether FreeNAS/TrueNAS exposes this, I don't know - so you'll have to find that out.
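At the command line, the incremental flow hinges on a snapshot (or bookmark) both sides already share; a minimal sketch with made-up names:

code:
# one full send to seed the destination
zfs send tank/data@mon | ssh backup zfs receive backuppool/data
# later: send everything between the shared snapshot and the newest one
zfs send -I tank/data@mon tank/data@fri | ssh backup zfs receive backuppool/data

-I replays all intermediate snapshots; plain -i sends just the delta between the two.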

Ziploc posted:

Oh son of a gently caress. I thought that by pointing it at the same dataset the last replication used to be on, it would just settle everything by deleting old snapshots/starting new/etc. But it's just filling up the volume even though the old data is still "there".

Oh FreeNAS what you doin. Why you do dis. Why don't your numbers add up.


I think this is a known bug when quotas and reservations aren't being used?
If not, remember that ZFS is pooled storage with datasets sharing that pool equally.

Ziploc
Sep 19, 2006
MX-5
Well, it filled the target up to 100% and then failed the replication. So instead of wasting time with no backup, I just deleted the target dataset and let it start from zero.

Though the only way I saw to cancel a replication task was to restart the target and then disable the replication task.

Hopefully it's smart enough to start the hell over after.

Starting to get just a touch nervous.
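For what it's worth, newer OpenZFS can resume an interrupted stream rather than starting over (receive -s plus the receive_resume_token property); whether the FreeNAS middleware exposes that is another question. A shell-level sketch, token elided:

code:
# receive with -s so an interrupted stream leaves a resume token behind
zfs send tank/data@fri | ssh backup zfs receive -s backuppool/data
# after the interruption, read the token off the destination dataset
ssh backup zfs get -H -o value receive_resume_token backuppool/data
# feed it back to zfs send to pick up where it left off
zfs send -t <token> | ssh backup zfs receive -s backuppool/data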

BlankSystemDaemon
Mar 13, 2009




I don't know enough about FreeNAS to explain why it's doing that; I use FreeBSD and it works great there.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!


YOU DON'T SAY

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


How do you know if someone at your party uses FreeBSD?

BlankSystemDaemon
Mar 13, 2009




That Works posted:

How do you know if someone at your party uses FreeBSD?
They won't shut up about it.

Dead Goon
Dec 13, 2002

No Obvious Flaws



I appreciate your comments, it's interesting to hear about *BSD, and the dedicated thread died on its arse and disappeared.

BlankSystemDaemon
Mar 13, 2009




I was just being self-deprecating, don't worry about it.

Yaoi Gagarin
Feb 20, 2014

ive never used bsd but i also enjoy the bsd-posting

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


VostokProgram posted:

ive never used bsd but i also enjoy the bsd-posting

Zorak of Michigan
Jun 10, 2006


BlankSystemDaemon posted:

One of the central pieces of ZFS is that it supports bit-level incremental send and receive, as documented in the zfs-send(8) and zfs-receive(8) manual pages - and the best part is, it's got absolutely nothing to do with the snapshots, so you can set it up at any time irrespective of when you took the snapshots.
Whether FreeNAS/TrueNAS exposes this, I don't know - so you'll have to find that out.

I forgot that I meant to ask about this. I see that they added bookmarks as an alternative to snapshots on the source side. Is that what you mean by "absolutely nothing to do with the snapshots"? Or is there some way I'm not grasping from the docs to generate an incremental stream without getting tangled up in any of that? My understanding is that you needed snapshots, or now bookmarks, to delineate the start and finish of the replication stream, and that the starting point snapshot had to exist on the destination side in order to receive an incremental snapshot. Otherwise, without that starting point where the source and destination are known to be the same at time x, sending the difference between time x and time y doesn't give you something useful.

Ziploc (or other posters, I guess), if you ever get tangled up like that again and have a chance to mess with it, I'd love to get down to the command prompt level and try to look at the ZFS underpinnings of the replication process and see what's gone wrong. I used to be a Solaris geek at work and did a lot of tinkering with zfs send and receive, but I have never had the chance to use FreeNAS' replication function.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
vaguely pondering the idea of colo'ing a 1U server for personal/family/friends use (personal code hosting and DB, nextcloud, plex, etc).

what would be good ZFS pool configurations for 12 drives? maybe a high-reliability pool and a space-efficient pool for bulk storage, and then a couple hot spares? or would 1 hot spare be enough with some mirrors, and I just hustle it over if there's a problem?

maybe do 2 spares, 2+2 mirrored vdevs on the reliable pool, and then the remaining 6 drives go in a raidz2 or 3+3 raidz1 for bulk storage? Or just have a single mirrored vdev and then go 8-drive raidz2 or 4+4 raidz1 for the bulk pool?

or maybe just have one big pool, and do 2+2+2+2+2 or 10-drive raidz2 or a pair of 5-drive raidz1s? I suppose anything really important I could back up to home, or to backblaze anyway.

I could probably have an M.2 for boot and an Optane M.2 for SLOG in addition, and could add a fair number of M.2s in ultra-low-profile PCIe adapter cards, so no need for super high performance on the HDD pools.
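One way those layouts come out in zpool-speak, with placeholder device names (four-disk striped mirrors for the fast pool, a 7-wide raidz2 plus a hot spare for bulk):

code:
zpool create fast mirror da0 da1 mirror da2 da3
zpool create bulk raidz2 da4 da5 da6 da7 da8 da9 da10
zpool add bulk spare da11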

IOwnCalculus
Apr 2, 2003





My zpool is colocated and is a mess nobody should ever replicate but I wouldn't do more than a single spare, and if you're limited to 12 drive bays then I'd keep that spare out of the server and do four three-drive raidz1s. Just hustle and/or power down the box until you can get to it if you have a drive poo poo the bed.

The only times I've had mass drive failures were either a family of drives all failing due to the same problem (and even then, with adjacent serial numbers, they failed over a period of months) or a bus compatibility problem (I have some WD30EFRX drives that apparently spray poo poo all over my SAS controller, because if the WD30EFRX drives are busy then ZFS randomly fails any drive in the pool).

Zorak of Michigan
Jun 10, 2006


Paul MaudDib posted:

vaguely pondering the idea of colo'ing a 1U server for personal/family/friends use (personal code hosting and DB, nextcloud, plex, etc).

what would be good ZFS pool configurations for 12 drives? maybe a high-reliability pool and a space-efficient pool for bulk storage, and then a couple hot spares? or would 1 hot spare be enough with some mirrors, and I just hustle it over if there's a problem?

maybe do 2 spares, 2+2 mirrored vdevs on the reliable pool, and then the remaining 6 drives go in a raidz2 or 3+3 raidz1 for bulk storage? Or just have a single mirrored vdev and then go 8-drive raidz2 or 4+4 raidz1 for the bulk pool?

or maybe just have one big pool, and do 2+2+2+2+2 or 10-drive raidz2 or a pair of 5-drive raidz1s? I suppose anything really important I could back up to home, or to backblaze anyway.

I could probably have an M.2 for boot and an Optane M.2 for SLOG in addition, and could add a fair number of M.2s in ultra-low-profile PCIe adapter cards, so no need for super high performance on the HDD pools.

I'm dubious about the value of different levels of reliability. If you lose data, you're going to be eating a scolding from friends and family, and who wants that? Unless you're really in a tough spot for affording the amount of space you need, I would say mirrored M.2 for your boot media, 11-disk raidz2 for data, one hot spare. If the data is really important, 11-disk raidz3. I actually feel safer with a larger raidz2 pool than a smaller raidz1 pool. Be sure to configure automatic periodic scrubs and keep an eye on the status of the pool.
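On the scrub point: TrueNAS schedules these from the GUI, but for reference, stock FreeBSD's periodic(8) machinery can do it, and on plain Linux a cron line suffices (pool name assumed):

code:
# /etc/periodic.conf on FreeBSD
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="14"   # days between scrubs
# or on Linux, e.g. weekly via cron:
# 0 3 * * 0  /sbin/zpool scrub tank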

Mr. Crow
May 22, 2008

Snap City mayor for life
Well poo poo, Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspace, which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Mr. Crow posted:

Well poo poo, Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspace, which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap?

backblaze b2 seems to be the cheapest actual metered service (well, $0.005 per GB-month vs $0.004 for Glacier, but if you ever need to actually retrieve from Glacier you're going to eat poo poo)
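for scale, at those list rates 12TB is roughly 12,288GB, so about $61/month on B2 against about $49/month on Glacier, before the retrieval and egress fees where Glacier claws it all back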

BlankSystemDaemon
Mar 13, 2009




Zorak of Michigan posted:

I forgot that I meant to ask about this. I see that they added bookmarks as an alternative to snapshots on the source side. Is that what you mean by "absolutely nothing to do with the snapshots"? Or is there some way I'm not grasping from the docs to generate an incremental stream without getting tangled up in any of that? My understanding is that you needed snapshots, or now bookmarks, to delineate the start and finish of the replication stream, and that the starting point snapshot had to exist on the destination side in order to receive an incremental snapshot. Otherwise, without that starting point where the source and destination are known to be the same at time x, sending the difference between time x and time y doesn't give you something useful.

Ziploc (or other posters, I guess), if you ever get tangled up like that again and have a chance to mess with it, I'd love to get down to the command prompt level and try to look at the ZFS underpinnings of the replication process and see what's gone wrong. I used to be a Solaris geek at work and did a lot of tinkering with zfs send and receive, but I have never had the chance to use FreeNAS' replication function.
I've not used ZFS bookmarks because my storage server is on 12.x, as I'm quite conservative with using OpenZFS for permanent storage - but Chris Siebenmann has a rather good article that explains them; it's basically a tradeoff between performance and space.

I see that it's also another instance of implicit documentation, instead of being explicit - so I guess I have to fix that too. :sigh:
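The mechanics, for anyone following along (names made up; note a bookmark can only be the source side of a plain -i send, and the destination still needs the matching snapshot):

code:
# keep a zero-size bookmark, then the space-eating snapshot can go
zfs bookmark tank/data@fri tank/data#fri
zfs destroy tank/data@fri
# later, incremental send from the bookmark up to a new snapshot
zfs snapshot tank/data@mon
zfs send -i tank/data#fri tank/data@mon | ssh backup zfs receive backuppool/data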

Paul MaudDib posted:

vaguely pondering the idea of colo'ing a 1U server for personal/family/friends use (personal code hosting and DB, nextcloud, plex, etc).

what would be good ZFS pool configurations for 12 drives? maybe a high-reliability pool and a space-efficient pool for bulk storage, and then a couple hot spares? or would 1 hot spare be enough with some mirrors, and I just hustle it over if there's a problem?

maybe do 2 spares, 2+2 mirrored vdevs on the reliable pool, and then the remaining 6 drives go in a raidz2 or 3+3 raidz1 for bulk storage? Or just have a single mirrored vdev and then go 8-drive raidz2 or 4+4 raidz1 for the bulk pool?

or maybe just have one big pool, and do 2+2+2+2+2 or 10-drive raidz2 or a pair of 5-drive raidz1s? I suppose anything really important I could back up to home, or to backblaze anyway.

I could probably have a M.2 for boot and an optane M.2 for SLOG in addition, and could add a fair number of M.2s in ultra-low-profile PCIe adapter cards, so no need for super high performance on the HDD pools.
The real benefit of raidz3 is that it's P+Q+R, meaning you can lose any three disks, whereas with striped mirrors, if you lose two specific disks, you've lost the entire array.

For my local offline backup, I'm doing 15-disk vdevs with raidz3 - but if you're doing local online backup, you might wanna go with a different configuration.

Zorak of Michigan posted:

I'm dubious about the value of different levels of reliability. If you lose data, you're going to be eating a scolding from friends and family, and who wants that? Unless you're really in a tough spot for affording the amount of space you need, I would say mirrored M.2 for your boot media, 11-disk raidz2 for data, one hot spare. If the data is really important, 11-disk raidz3. I actually feel safer with a larger raidz2 pool than a smaller raidz1 pool. Be sure to configure automatic periodic scrubs and keep an eye on the status of the pool.
So, this is going to get pretty far into the weeds, but when we're talking about RAID arrays there's a term related to MTBF called mean time to data loss (MTTDL) - and there are calculators that can help figure these things out from the MTBF specs (as well as real-world numbers) of the disks you use.
The biggest difference here is that the resilver speed of ZFS is a LOT quicker than traditional hardware RAID (the latter being ~10MB/s while ZFS easily does 100MB/s, in my experience).
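The usual first-order approximation, ignoring unrecoverable read errors, is MTTDL ≈ MTBF² / (N(N-1) × MTTR) for single parity, with each extra parity level multiplying in roughly another MTBF/MTTR factor - which is why a 10x faster resilver (a 10x smaller MTTR) buys you about as much as disks that are 10x more reliable.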

Mr. Crow posted:

Well poo poo Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspaces which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap?
And you hadn't been planning for this? :ohdear:
I mean, it's Google we're talking about - of course they were going to pull it. About the only thing they're never going to kill is their ad business, because that's basically the only thing making them money.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Paul MaudDib posted:

backblaze b2 seems to be the cheapest actual metered service (well, $0.005 per GB-month vs $0.004 for Glacier, but if you ever need to actually retrieve from Glacier you're going to eat poo poo)

When I relocated my office and changed all the IT infrastructure I went with Backblaze. Backups for $1.83 per month instead of $180 per month on Azure.

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:


The real benefit of raidz3 is that it's P+Q+R, meaning you can lose any three disks, whereas with striped mirrors, if you lose two specific disks, you've lost the entire array.



Somewhat true, depending on how you lose the disks. Total failure, yes. With a partial failure, like one drive dead and an unrecoverable sector on another in the same vdev, ZFS can probably pull most of your data out of the fire.

Main reason I like having lots of raidz1 vdevs is the ability to grow the array without having to replace every drive at once. The downside of this is that you inevitably end up with unbalanced vdevs, but so far I've been able to manage this without block pointer rewrite by upgrading the most full vdev and then playing musical disks the whole way down so that each vdev has a similar-ish amount of free space.

If you aren't concerned about expanding by just upgrading three or four drives at a time, then yeah, just go with a single raidz2 or raidz3, depending on how much you hate the idea of reacquiring Linux ISOs.
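The grow-in-place flow being described, sketched with placeholder names:

code:
# add another three-drive raidz1 vdev to widen the pool
zpool add tank raidz1 da12 da13 da14
# or upgrade an existing vdev disk by disk; it grows once every member is bigger
zpool set autoexpand=on tank
zpool replace tank da4 da15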

FunOne
Aug 20, 2000
I am a slimey vat of concentrated stupidity

Fun Shoe

Mr. Crow posted:

Well poo poo Google is cancelling G Suite Business (of course they are) and migrating it to Google Workspaces which loses the unlimited cloud storage... what are the recommended options for hosting 12+ TB of backups on the cheap?

Google Cloud? I think it's $0.004/GB/month now. Less if you want less redundancy.
