IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

With ZFS, there's nothing preventing you from replacing it with an NVMe SSD using the zpool replace command.
ZFS doesn't give a gently caress about what driver you're using, nor what the disk is.

I expect that Generic Monk accidentally added it as a new single-disk vdev.
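For reference, roughly what checking and fixing that looks like from the command line; the pool and device names below are just placeholders:

code:
  # see how the special vdev actually ended up in the pool
  zpool status tank
  # the accidental form: a bare single-disk special vdev
  zpool add tank special /dev/nvme0n1
  # the form you generally want: a mirrored special vdev
  zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1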

Yaoi Gagarin
Feb 20, 2014

It should probably throw a bunch of warnings at you if you try to make a special vdev without redundancy.
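It does push back, at least when the rest of the pool is redundant: zpool add refuses a lone-disk top-level vdev with a "mismatched replication level" style error unless you force it. A rough sketch, names are placeholders:

code:
  zpool add tank special /dev/nvme0n1      # refused with a warning on a raidz/mirror pool
  zpool add -f tank special /dev/nvme0n1   # -f pushes it through anyway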

Generic Monk
Oct 31, 2011

BlankSystemDaemon posted:

I'm interested to learn how Generic Monk managed to learn about the 'special' vdev, without learning that it should always have its own redundancy via mirroring (or even n-way mirroring), though.
All the documentation I know of makes a big deal out of making it very explicit.

I saw it on a YouTube video!!!!!!! I may not even have watched the entire video; I think I just heard that you could cache small files using that feature and thought it was a way to get some kind of tiered storage thing.

I know I could mirror it, but honestly I can't be arsed; all the irreplaceable stuff on here is backed up. Most of what's on here is movies and TV shows served over 1gbit, so it's not really affecting performance and there was no real reason for me to enable it in the first place. That goes for L2ARC and ZIL as well, but you bet your arse I enabled those too without really knowing what they did or that I wouldn't see any benefit at all.

I wanna rejig the array and add another HDD since I have a free bay in this microserver; either I watch all the movies on here and nuke/rebuild it or it craps out and I rebuild it. Same diff lol

Computer viking
May 30, 2011
Now with less breakage.

Even over 1gbit you may benefit from the much faster metadata with the special vdev for things that look up a lot of filenames - like listing huge directories or searching by filename.
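For context, a special vdev picks up new metadata automatically once it's in the pool (anything written before it was added stays on the spinning disks until rewritten), and you can optionally send small file records to it per dataset. A rough sketch; pool and dataset names are placeholders:

code:
  # optionally route small file blocks to the special vdev as well, per dataset
  zfs set special_small_blocks=32K tank/media
  zfs get special_small_blocks tank/media
  # see how much has actually been allocated on the special vdev
  zpool list -v tank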

Generic Monk
Oct 31, 2011

Computer viking posted:

Even over 1gbit you may benefit from the much faster metadata with the special vdev for things that look up a lot of filenames - like listing huge directories or searching by filename.

Yeah, the latency improvement for those operations was one of the things that tempted me iirc. Anecdotally it seems to thrash the disks a lot less doing stuff like that. It would be interesting to do a test with it and without it, though I can't do that for obvious reasons.

I don't really have many ports to work with, hence why I don't really want to gently caress about with doing a mirror, but I do have a free expansion slot. Do they do low-profile PCIe cards that let you put 2 M.2s on one card? I seem to recall there being some weird limitation with it (bifurcation?) unless you spent a boatload.

IOwnCalculus posted:

I expect that Generic Monk accidentally added it as a new single-disk vdev.

Guilty, but I suppose you can still mirror it to another device to get the data across
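For what it's worth, a single-disk special vdev can be turned into a mirror in place with zpool attach, no copying needed; roughly (existing device first, new device second, names are placeholders):

code:
  # attach a second device to the existing special disk; it resilvers into a mirror
  zpool attach tank /dev/nvme0n1 /dev/nvme1n1
  zpool status tank   # watch the resilver finish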

BlankSystemDaemon
Mar 13, 2009



Whoever made that video needs to delete it, because it's almost deliberate misinformation to claim it's a "cache" and that you can use a single disk.

Korean Boomhauer
Sep 4, 2008

Computer viking posted:

Even over 1gbit you may benefit from the much faster metadata with the special vdev for things that look up a lot of filenames - like listing huge directories or searching by filename.

This would rule. Gonna set this up the moment it drops.

Computer viking
May 30, 2011
Now with less breakage.

I haven't benchmarked that, so it's just conjecture - but if it moves the directory entries to the SSD, I would certainly expect it to help.
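A rough way to check whether the special vdev is actually servicing metadata reads, if anyone wants to try it; paths and pool name are placeholders, and export/import between runs so the ARC doesn't hide the difference:

code:
  # empty the ARC so the test actually hits the vdevs
  zpool export tank && zpool import tank
  # metadata-heavy operations to time
  time find /tank/media -type f > /dev/null
  time ls -lR /tank/media > /dev/null
  # meanwhile, watch which vdevs take the reads
  zpool iostat -v tank 5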

Korean Boomhauer
Sep 4, 2008
Mostly I don't wanna thrash the spinning rust if I wanna search my NAS for something

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



I just have the contents of my NAS added to voidtools Everything, and despite it having to index the share (unlike local drives) it keeps up with changes and the search itself is instant.

Generic Monk
Oct 31, 2011

BlankSystemDaemon posted:

Whoever made that video needs to delete it, because it's almost deliberate misinformation to claim it's a "cache" and that you can use a single disk.

I think the misunderstanding was entirely on my part tbf. Or I just didn’t care! This is just a hobby project that lives under my sofa after all; all the valuable stuff on it is backed up


e: I think it was this one and he doesn’t mention it!!!!!! lol I like wendell as well

https://m.youtube.com/watch?v=QI4SnKAP6cQ&pp=ygUbemZzIG1ldGFkYXRhIHNvZWNpYWwgZGV2aWNl

still shoulda done my research tho

Generic Monk fucked around with this message at 19:47 on May 16, 2024

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
4:18 "it should have some level of redundancy".

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

4:18 "it should have some level of redundancy".
Even when demonstrating this with truncate'd files, I've used redundancy.
Wendell is a good enough educator that he should've known better.
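For anyone curious, that kind of throwaway demo pool looks roughly like this; the backing files are sparse, and the paths and pool name are just examples:

code:
  # sparse files standing in for disks
  truncate -s 1G /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/s1 /tmp/s2
  # raidz data vdev plus a mirrored special vdev, even in a toy pool
  zpool create scratch raidz /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 special mirror /tmp/s1 /tmp/s2
  zpool status scratch
  zpool destroy scratch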

Generic Monk
Oct 31, 2011

Combat Pretzel posted:

4:18 "it should have some level of redundancy".

Oops lol

YerDa Zabam
Aug 13, 2016



ah, the old load-bearing "should"

bawfuls
Oct 28, 2009

is this the right thread to ask why my Sonarr is broken all of a sudden or is there a better place for that?

Inceltown
Aug 6, 2019

bawfuls posted:

is this the right thread to ask why my Sonarr is broken all of a sudden or is there a better place for that?

I'm running the latest stable right now and it seems to be working. Maybe try their GH page to see if someone else is having your issue.

8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY :filez:
Definitely check the logs; I've been hit with some DB corruption issues that essentially stop any function in Sonarr from working properly.
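Sonarr's state lives in a SQLite database, so a quick integrity check can confirm or rule that out; the path and service name below are assumptions that vary by install (Docker, package, Windows), so adjust accordingly:

code:
  # stop the service before poking at the database (service name/path are guesses)
  systemctl stop sonarr
  sqlite3 /var/lib/sonarr/sonarr.db "PRAGMA integrity_check;"
  systemctl start sonarr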

bawfuls
Oct 28, 2009

Not sure what I'm even looking for in the log files.

When Sonarr searches either automatically or manually, it sees no results. This is true for any show I've tried. But within Sonarr settings, when I test my indexers they all work. The indexers also work in Jackett, and in Radarr. When I search in Radarr I get results as expected. Some of these indexers are generic, like the pirate bay, so they are identical between Sonarr and Radarr.

The Sonarr log file shows a search attempt and just says no results found.

edit: It might actually just be certain shows, which is something other people seem to have issues with. It might just be one show with weird title formatting and another that I haven't been able to find on indexers for a year.

bawfuls fucked around with this message at 01:52 on May 18, 2024

TenementFunster
Feb 20, 2003

The Cooler King

bawfuls posted:

is this the right thread to ask why my Sonarr is broken all of a sudden or is there a better place for that?
skill issue

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Is it possible to import a ZFS pool originally created in TrueNAS 13 into Debian 12?

Computer viking
May 30, 2011
Now with less breakage.

fletcher posted:

Is it possible to import a ZFS pool originally created in TrueNAS 13 into Debian 12?

Should be fine, unless the TrueNAS pool is using some cutting edge feature and the Debian version of OpenZFS is older. I don't think TrueNAS is super bleeding edge on the ZFS features, so I'd expect no problems.
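The move itself is just an export on one side and an import on the other; the pool name is an example:

code:
  # on the TrueNAS box
  zpool export tank
  # on Debian 12, after installing ZFS (zfsutils-linux + zfs-dkms from contrib)
  zpool import          # scans attached devices and lists importable pools
  zpool import tank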

Zorak of Michigan
Jun 10, 2006


ZFS has tools for getting either the version or, in newer ZFS versions, the feature flags for a pool. You could then (try to) verify that the ZFS version available for Debian 12 supports that version or those flags. The zpool get docs should provide more information.
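Concretely, something along these lines; the pool name is an example:

code:
  # feature-flag pools report version 5000, legacy pools a number up to 28
  zpool get version tank
  # list which features are enabled/active on this pool
  zpool get all tank | grep feature@
  # on the destination system, list every feature its OpenZFS build understands
  zpool upgrade -v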

BlankSystemDaemon
Mar 13, 2009



ZFS has had a -o compatibility flag on the zpool create command for a few years now.

Edit: It’s not fun, but if all else fails, it is possible to make a read-only import that even v28 might be able to support.

BlankSystemDaemon fucked around with this message at 01:51 on May 19, 2024
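A rough sketch of both options; the compatibility set name is an example (the available sets live in /usr/share/zfs/compatibility.d on Linux), and the pool/device names are placeholders:

code:
  # restrict a new pool to a published feature set so older systems can import it
  zpool create -o compatibility=openzfs-2.0-linux tank mirror /dev/sda /dev/sdb
  # the property can also be set on an existing pool (it only gates future feature enables)
  zpool set compatibility=openzfs-2.0-linux tank
  # last resort: read-only import, which tolerates unsupported features much better
  zpool import -o readonly=on tank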

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
I have two 4TB drives in my Windows PC that are both close to filling up. Rather than adding a third, I'd like to replace them with a single drive >=12TB. Unfortunately only drives labelled Enterprise or NAS go that large. Anyone have experience using Exos or IronWolf Pros as desktop drives? They offer a nice cost per TB, but I'm worried about noise.

Pablo Bluth fucked around with this message at 13:12 on May 19, 2024

kliras
Mar 27, 2021

Pablo Bluth posted:

I have two 4TB drives in my Windows PC that are both close to filling up. Rather than adding a third, I'd like to replace them with a single drive >=12TB. Unfortunately only drives labelled Enterprise or NAS go that large. Anyone have experience using Exos or IronWolf Pros as desktop drives? They offer a nice cost per TB, but I'm worried about noise.
my 18tb ironwolf pro had an insufferable clicking noise from the read head returning all the time, so i had to use keepalive to keep it busy

the issue probably isn't as bad once it fills up more, but the sound drove me nuts. i'm gonna disable keepalive for a bit to see if it's still an issue

haven't had that issue with wd that i recall, but that was also with much smaller drives that were probably filled up way more

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
Checking prices more carefully (current scan.co.uk prices), based on £/TB, I'll probably pick up another N300 series drive in 14 or 18TB. I have a pair in a desktop ZFS setup. The only thing more attractive is the Toshiba MG but I know the N300 are acceptable noise wise.



edit: apparently Toshiba do actually do an X300 Pro line that goes up to 22TB and is intended for desktops, but its distribution seems minimal at best (and non-existent in Europe)

Pablo Bluth fucked around with this message at 14:29 on May 19, 2024

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Harik posted:

Hardest part is going to be data migration. 12tb of my current nas is welded into the current system via bcache on the boot ssd, so I can't move those without taking the whole thing offline. theoretically you can undo bcache but I'd prefer not to. So I'm really not sure. I can't really down them completely for a week to move stuff around...

the half-formed plan in my head right now is:
* bring up the new, empty array
* buy a second 40gbe DAC (I have another connectx-3 for it) so they're both on my 40gbe switch
* down the network share on the old machine
* export the drives via iscsi/nbd
* bring them up on the new machine as samba/nfs shares.
Doing it as raw block rather than filesystem gets rid of a lot of the overhead and lets the massively larger and faster RAM cache on the new box cut down on IOPS required?
* sync to the new array from a snapshot while leaving the shares writable.
* offline everything to sync the new changes.
* export the shares from the new array

There's various issues with the obvious plan of "just put all the drives in the new box". first is that bcache fuckup. second is I'd have to buy a lot of extra drive trays @$10/ea to mount them temporarily. Or... ugh go across cases with the cables? Just asking for trouble.

I dunno. Opinions on a better option? I suppose it's only an additional $60 for the trays, and I will be filling them with more drives eventually. Overall migration would be about the same minus the network connection. Snapshot, sync, offline, sync changes, export from new zvol, etc.

tl;dr: moving a 4x4 raid5 + 3x10raid5 to ~6x16 z2 with the least downtime and risk.

It's been a journey. Finally got enough parts in to complete the build; eventually decided on 8x16TB used Seagate Exos X16 for $140 each. I'm waiting on trays so two are in the oddball 2.5 "but can be used as 3.5* some restrictions apply" brackets for now. Good enough for testing.

... and the motherboard has a bad DIMM channel. Channel "H" is shot; no matter what DIMM I put in it, it's not showing up. It does show up in the IPMI as a temperature measurement, so the I2C or whatever control channel works, but it can't train. Lovely.

For now I'm running on the 'wrong' set of sockets, AB/EF rather than the recommended CD/GH for a 4 dimm configuration, but I intended to run all 8 for either 192 or a full 256gb setup eventually. Not sure what to do. I've contacted tugm4770 but don't really want to wait a week and tear the whole thing apart & rebuild. Maybe he'll send me a partial refund and I'll just put that towards 4x 64gb rather than 8x32 and that'll be enough. I don't have a torque star for taking out the CPU and checking the socket for bent pins... unless I can find my threadripper tool and it's the same specs.

In the meantime, still progressing.

I picked up a QSFP+ "0.3m" DAC, not realizing they measured tip-to-tip and there's only about 4" of actual flexible cable on that. So now the new fileserver is going rear end-to-rear end with the network switch because gently caress buying another DAC for the transfer process.

... and it's so very miserably, unbearably slow. I was hoping running the arrays as NBD on the new server would help, but no, they're still limited by this ancient server to about 100MB/s per array. I gave up on anything fancy and I'm just letting rsync run for a few days because the limit is the pathetic drive bandwidth. No idea why it's so slow; I'm only getting 100MB/s aggregate off each array. It shouldn't be PCIe limited, even 2.0 x8 is way faster than that. The drives should be pushing 100MB/s each, not in aggregate.

I'm doing 360gb/hr per array, so it should take about 3 days to copy the whole thing over, then an hour or two of downtime to re-sync any changed files. If the new drive trays show up first I'll move at least the larger array over to finish the copy.

While that's all going on I've got a fuckton of containers to spin up.
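For the bulk-copy-then-catch-up part, the usual pattern is roughly this (paths are placeholders): one long pass while everything stays live, then a short outage for the final delta:

code:
  # first pass, run while the old shares are still online
  rsync -aHAX --info=progress2 /old/array/ /new/array/
  # later, during the downtime window: copy only what changed, propagate deletions
  rsync -aHAX --delete --info=progress2 /old/array/ /new/array/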

kliras
Mar 27, 2021
hdd's running in the mid 40s to 50, probably because it's toasty out. i've got all kinds of fans, but i guess a lian li o11d evo just doesn't have a lot of cooling for drives in the drive cages (d15 cpu cooler with 2x140 bequiet silentwings in most places)

i'll be switching to watercooling in a few days or weeks depending on whether i have the time and energy which should also help cool the vrm and motherboard, but is there anything worth being mindful of during all this?

i only noticed because i already had crystaldiskinfo running to monitor one of the drives that was erroring with some bad sectors, hopefully not for reasons due to heat

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

kliras posted:

hdd's running in the mid 40s to 50, probably because it's toasty out. i've got all kinds of fans, but i guess a lian li o11d evo just doesn't have a lot of cooling for drives in the drive cages (d15 cpu cooler with 2x140 bequiet silentwings in most places)

i'll be switching to watercooling in a few days or weeks depending on whether i have the time and energy which should also help cool the vrm and motherboard, but is there anything worth being mindful of during all this?

i only noticed because i already had crystaldiskinfo running to monitor one of the drives that was erroring with some bad sectors, hopefully not for reasons due to heat

Mid 40s seems like good temperatures to me!

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
yeah those temps aren't really scary tbh - you could check your drive datasheet(s) to verify their specs but I'd be surprised if that was cause for concern.
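The drives' own SMART data is the easiest place to read both the temperature and any reallocated-sector counts; the device path is an example:

code:
  # per-drive temperature and reallocated sectors (adjust the device path)
  smartctl -A /dev/sda | grep -Ei 'temperature|reallocat'
  # or just the overall health verdict
  smartctl -H /dev/sda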

Anime Schoolgirl
Nov 28, 2002

Pablo Bluth posted:

Checking prices more carefully (current scan.co.uk prices), based on £/TB, I'll probably pick up another N300 series drive in 14 or 18TB. I have a pair in a desktop ZFS setup. The only thing more attractive is the Toshiba MG but I know the N300 are acceptable noise wise.



edit: apparently Toshiba do actually do an X300 Pro line that goes up to 22TB and is intended for desktops, but its distribution seems minimal at best (and non-existent in Europe)
i know someone who bought an X300 18TB that turned out to be SMR.

Kivi
Aug 1, 2006
I care

Harik posted:

... and the motherboard has a bad DIMM channel. Channel "H" is shot; no matter what DIMM I put in it, it's not showing up. It does show up in the IPMI as a temperature measurement, so the I2C or whatever control channel works, but it can't train. Lovely.

For now I'm running on the 'wrong' set of sockets, AB/EF rather than the recommended CD/GH for a 4 dimm configuration, but I intended to run all 8 for either 192 or a full 256gb setup eventually. Not sure what to do. I've contacted tugm4770 but don't really want to wait a week and tear the whole thing apart & rebuild. Maybe he'll send me a partial refund and I'll just put that towards 4x 64gb rather than 8x32 and that'll be enough. I don't have a torque star for taking out the CPU and checking the socket for bent pins... unless I can find my threadripper tool and it's the same specs.

SP3? I'd just check the socket screws and the CPU for correct torque. On some boards the actual screws holding the socket to the PCB have been loose, though I haven't had that happen myself. I've used the TR torque screwdriver for my own EPYC setups and it's fine. You won't break anything if you tighten too much; there's supposedly a spring that keeps you from overtorquing, though if vendors claimed that outright, someone would take air tools to it.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Kivi posted:

SP3? I'd just check the socket screws and the CPU for correct torque. On some boards the actual screws holding the socket to the PCB have been loose, though I haven't had that happen myself. I've used the TR torque screwdriver for my own EPYC setups and it's fine. You won't break anything if you tighten too much; there's supposedly a spring that keeps you from overtorquing, though if vendors claimed that outright, someone would take air tools to it.

yeah, it's that or I left a standoff/there's a standoff bump I need to cover with kapton. I did do a count of standoffs and screws and they matched but I may have missed something. Thanks for letting me know it's worth it to dig out the threadripper box, it's pretty buried so I was :effort: for the moment.


I wasn't actually expecting this:


The array can't quite saturate 10gbe on writes, but it absolutely can on cached reads.



The "40gig" is going to be limited by the ancient xeon 3450 it's in. iperf only gets 25 between them, so not too surprising I'm only hitting 15.5 via nfs.

Yes. It's 5.5 times faster for the old xeon to read the new array via nfs than from its local disks. it's just that ludicrously obsolete now.

Took a few days but everything is first-round copied to the array. I'm setting up read-mostly exports to verify everybody can access it, then I think next step is to do the extended downtime to copy the latest live data over and down the OG arrays

Another bonus: I thought I had to set up absolutely every service and get everything working perfectly, then do one big cutover.

Lol. No.

The new server is so much faster than the old one that if I unmount the data locally and mount it via nfs I get a massive performance boost.

So that's step 1.

Step 2 is to image the old boot filesystem, get it up and running in a VM, and turn off the old hardware entirely. Slightly tricky but doable I think. It's still insecure as hell with everything in a single image (no RAM or CPU to spare for containers before), but if it's in a VM it's got snapshots and the array isn't directly accessible, only the NFS export.

I'm thinking just skip directly to step 2 tomorrow. That'll let me containerize and move things at my leisure.

Harik fucked around with this message at 14:12 on May 22, 2024

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Kivi posted:

SP3? I'd just check the socket screws and the CPU for correct torque. On some boards the actual screws holding the socket to the PCB have been loose, though I haven't had that happen myself. I've used the TR torque screwdriver for my own EPYC setups and it's fine. You won't break anything if you tighten too much; there's supposedly a spring that keeps you from overtorquing, though if vendors claimed that outright, someone would take air tools to it.

That's exactly what it was, thanks Kivi. Dug out the Threadripper swag lunchbox (how cool were those?) and used the torque wrench from there. I did, of course, open it up and lift the chip to inspect the pins underneath from a couple of angles, but none looked damaged. It was probably my fault while installing the heatsink: the screws were too short to catch properly without really forcing them down to compress the springs, and it ended up springing up like a jack-in-the-box when I was tightening the opposite side. They had tested it with all 8 channels before shipping, so it was either shipping or installation that moved it.

I'm a lot happier now. Project 'virtualize the old server' is progressing quickly. I've got an image of it booting now with the networks and drives disabled, going to test how it behaves on the network for a bit before swapping IPs, doing a final sync from the live box, and cutting everything over for good.

E: Why didn't anyone tell me you absolutely, positively, 100% must create a dataset first before copying 23 TB of data into your root, ugh. And there's no way to break a clone from its origin so I can never delete any of these files for the life of the array. gently caress. guess I'm doing local rsync into datasets and rm -r everything I copied first.

If I made the same mistake in btrfs you just make a writable snapshot and now that's your subvolume/dataset/whatever, and you can delete everything in the root dataset. I guess ZFS didn't have shared references until very recently so the snapshots are built on a completely different tech stack.

E2: if it WASN'T the root dataset I could clone it, promote the clone, then destroy the original. But I don't see any way to deal with an inversion where the root dataset depends on a snapshot?

Harik fucked around with this message at 03:23 on May 23, 2024
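For the non-root case in E2, the clone/promote dance goes roughly like this; dataset names are made up, and it doesn't help with the root-dataset inversion, where rsync into fresh datasets really is the practical fix:

code:
  # split data out of an existing (non-root) dataset without copying it twice
  zfs snapshot tank/olddata@split
  zfs clone tank/olddata@split tank/media
  zfs promote tank/media          # the clone now owns the snapshot
  zfs destroy tank/olddata        # the original can go once nothing references it
  zfs destroy tank/media@split    # optional cleanup afterwards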

Animal
Apr 8, 2003

Hey guys, glad I found this thread. I turned some "old" parts I had lying around into an unRAID server (NR200 case, 12900K, 32GB DDR5, 2x NVMe SSDs, 1 SATA SSD, 2x 16TB HDDs, 1x 4TB HDD) and somehow it's all working great despite my almost complete ignorance of all things Linux and networking. If I may, I'm gonna post my journey here because no one else in my life cares at all when I talk about it even though they reap the benefits.


I have a 2Gb down/up FiOS connection. I travel for a living, so I wanted to set up a headless server to host a WireGuard VPN, Plex server, game servers, an adblocker for my household, photo backup, Time Machine backup, etc. After some research I settled on unRAID. The lifetime license would be the biggest expense. I purchased the two Seagate 16TB Exos X16 drives refurbished from a server reseller on eBay. Since this is unRAID, I don't care that they are used drives. They were super cheap at $147 each. I'm using one as parity, and between the other one plus the old 4TB WD Red I had running Plex, that's 20TB available. I'm not even using 6TB, so I'll be good to go for a long time. I had two laptop 512GB NVMe drives that I combined into a cache pool, and an old 750GB SATA drive that I'm using as a transit cache for downloads and SMB transfers, to hold data until the nightly move to the array. The drives are attached to the case haphazardly using electrical tape and screws where available, and I zip-tied an old Noctua 120mm fan from my parts bin so that it provides airflow to all the drives. No video card needed thanks to QuickSync, so the PCIe slot is open. I would like to fill it with a 10GbE card that also has SATA ports, but I haven't been able to find that product. The 12900K has been doing a great job with GPU Plex transcoding and running two game servers, plus whatever I throw at it. I've seen it pegged a few times, which is surprising for a beefy processor. Usually when there are a few Plex viewers transcoding, plus movies and series downloading, and players on the game servers.

Here's what I'm running

Plex + arr app suite. About 6 regular viewers, and they can request whatever they want using Overseerr. I have subscriptions to two newsgroup providers on different backbones. Radarr + Sonarr using NZBGeek as the main indexer. I had a LOT of errors using the NZBGet downloader; most of the files would fail to repair. It took me over a week to diagnose, and I thought I had bad RAM. But I switched to SABnzbd and that is working wonderfully. Download speeds are up to 190mbps; it's amazing to see a 2Gb fiber connection get used to the max.

PalWorld and V-Rising servers: running perfectly without a hitch 24/7. The V-Rising server has been populated by random pubbies and usually has 3-5 players. I feel this strange sense of duty to make sure the server runs reliably for them and my friends.

AdGuard Home, with my router configured to use it as DNS. Seems to get a 97% block rate on a testing site, so it's good!

Wireguard VPN: I've been able to connect to it from about 10 different countries for the normal VPN duties. Also useful to manage the unRAID OS remotely. We'll see when I go to the PRC if it truly works.



All in all I'm super happy with unRAID, and I feel this great sense of accomplishment that I got all this functionality out of mostly spare parts that I didn't have time to sell. I may upgrade to a more bespoke server case in the future, like a JONSBO mITX case. But part of the fun is trying to get as much out of this while spending as little money as possible. So the only thing I think I should spend money on is a good UPS.

Ok friends, thanks for reading.

FAT32 SHAMER
Aug 16, 2012



Animal posted:

If I may, I'm gonna post my journey here because no one else in my life cares at all when I talk about it even though they reap the benefits.

Lord, do I feel this in my soul

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Animal posted:

no one else in my life cares at all when I talk about it even though they reap the benefits.

Too long for a thread title unfortunately

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Harik posted:

The "40gig" is going to be limited by the ancient xeon 3450 it's in. iperf only gets 25 between them, so not too surprising I'm only hitting 15.5 via nfs.
If you're using NFS, you should be able to use RDMA, too. There are probably tutorials on how to do that. I'm not using NFS, so I can't help. I do use RDMA on NVMe-oF with my ConnectX-3 cards, tho, and it makes a difference. Less overhead dicking around with the TCP/IP stack lets your CPU shovel more over the wire.
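For the curious, NFS over RDMA on Linux is mostly a matter of loading the transport modules and adding the RDMA port; a rough sketch, and the exact module and interface names vary by distro and kernel:

code:
  # server side: enable the RDMA transport for the kernel NFS server (20049 is the standard port)
  modprobe svcrdma
  echo "rdma 20049" > /proc/fs/nfsd/portlist
  # client side: load the client transport and mount with the rdma option
  modprobe xprtrdma
  mount -t nfs -o rdma,port=20049 server:/export /mnt/export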

Nam Taf
Jun 25, 2005

I am Fat Man, hear me roar!

Animal posted:

Wireguard VPN: I've been able to connect to it from about 10 different countries for the normal VPN duties. Also useful to manage the unRAID OS remotely. We'll see when I go to the PRC if it truly works.

It probably won't work in the mainland. It would when I was there in 2018/2019, but times have changed and Wireguard doesn't do any obfuscation so from what I've read DPI picks it up fairly quickly.

I'd recommend a reverse proxy. There are VPNs that offer specific solutions for the GFW, e.g. Mullvad. If you want to just throw money at the problem, this is probably the easiest solution.

Alternatively, if you're serious about going down the roll-your-own avenue, the current hotness looks to be the v2ray and Xray platforms, with modern protocols (e.g. vmess for v2ray-core) and obfs options (e.g. reality for xray). Apparently Shadowsocks with an appropriate obfs plugin (e.g. v2ray) still works too and it's what most retail VPNs use for the GFW, but it's less performant and has other limitations that newer-gen solutions resolve.

You can then throw your VPN through the reverse proxy. It'll be slower, but it'll get you in to janitor stuff.

1.1.1.1 + WARP may or may not work. I've seen conflicting reports.

e: I just realised how off-topic this is getting for the NAS/storage thread. If you want to discuss the roll-your-own options, we should probably go to the self-hosting thread.

Nam Taf fucked around with this message at 02:29 on May 24, 2024
