|
BlankSystemDaemon posted:Where are you going to page to, if not a swap device? Paging's most important purpose is to allow translation between virtual and physical addresses
|
# ¿ Jan 2, 2021 17:34 |
|
|
# ¿ May 16, 2024 15:57 |
|
BlankSystemDaemon posted:No, paging is the act of moving resident memory around - translation is handled by the page table/map (and hardware MMU, if available), which is another part of the VM subsystem that paging is also a part of. You know what, you're right. Paging does specifically refer to swapping. My bad. But even in the absence of a swap device the OS can map parts of files into the fs cache, or into any VMA a process requests. My point was swap is good because it gives the OS the option to use physical memory in a more useful way, even when the working set does not exceed physical memory capacity.
|
# ¿ Jan 2, 2021 18:39 |
|
BlankSystemDaemon posted:I swear, I didn't know this was going to happen, but following the conversation VostokProgram and I had on paging, Mark Johnson - probably one of the smartest people in the FreeBSD project - wrote an article on how FreeBSD handles swap. Good article! Makes me wonder if TrueNAS enables swap.
|
# ¿ Jan 17, 2021 23:23 |
|
BlankSystemDaemon posted:So if it's bit-level parity, doesn't that mean that there's an implication that if you have, say, 4 drives of full of 10TB each, that your parity drive should be 40TB? I don't think so? It only needs to be as large as the largest drive. XOR bit i from every drive, write to bit i of the parity drive.
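To make the XOR trick concrete, here's a toy bash sketch with made-up byte values standing in for four drives - parity is the XOR across all of them, and any one lost drive is the XOR of the survivors plus parity:

```shell
# toy RAID parity: one byte per "drive", values are made up
d1=$((0x3A)); d2=$((0xC5)); d3=$((0x11)); d4=$((0x7F))

# parity byte i = XOR of byte i from every data drive
parity=$(( d1 ^ d2 ^ d3 ^ d4 ))
printf 'parity     = 0x%02X\n' "$parity"     # 0x91

# "drive 2" dies: XOR the surviving drives with parity to rebuild it
rebuilt=$(( d1 ^ d3 ^ d4 ^ parity ))
printf 'rebuilt d2 = 0x%02X\n' "$rebuilt"    # 0xC5, matches d2
```

This is also why the parity drive only needs to match the largest data drive: byte i of parity only ever depends on byte i of each data drive, never on any other offset.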
|
# ¿ Jan 18, 2021 13:03 |
|
Scuttle_SE posted:Not that much...maybe 150-200TB What hardware and software do you use?
|
# ¿ Jun 16, 2021 19:46 |
|
Paul MaudDib posted:so it's to store the metadata and act kind of like a WAL/SLOG then? What's the difference between an allocation class device and a SLOG then? SLOG is only used for synchronous writes, e.g. through a file descriptor opened with O_SYNC. It's not a general purpose write cache. Non synchronous writes are cached in memory. IIRC special device isn't even really a cache for metadata, it's actually just the primary storage for it?
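For illustration, a "synchronous write" here just means one issued through O_SYNC (or followed by fsync). With GNU dd you can force that with oflag=sync - the /tmp path is only a stand-in, but pointed at a ZFS dataset, each of these writes has to hit the ZIL (and therefore the SLOG, if one exists) before dd gets its ack:

```shell
# each 4k block is written O_SYNC, so the kernel can't just park it in the
# page cache and ack immediately - on ZFS this is exactly the traffic
# a SLOG device absorbs
dd if=/dev/zero of=/tmp/syncdemo bs=4k count=8 oflag=sync 2>/dev/null

stat -c %s /tmp/syncdemo    # 32768 - all 8 blocks on stable storage
```

Assumes GNU dd and coreutils; async writes to the same file (no oflag=sync) would never touch a SLOG at all.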
|
# ¿ Jul 23, 2021 23:35 |
|
BlankSystemDaemon posted:Writes aren't the big issue with DM-SMR drives, in so far as while they're worse than PMR drives they're not that much worse. Here's a question - let's say you do this zfs send/receive once, fully overwriting the drive. Then you go to do it again - won't the drive go crazy trying to rewrite all the shingles, because it still thinks that is useful information?
|
# ¿ Aug 9, 2021 17:51 |
|
BlankSystemDaemon posted:The entire point of WORM is to write once and read many. If you end up overwriting anything, you're not using it as WORM media. I mean there's WORM, and then there's "mostly" WORM. I can't imagine a use case where you would literally only write to a drive once in its entire life.
|
# ¿ Aug 10, 2021 20:23 |
|
BlankSystemDaemon posted:My point is, that's all DM-SMR is good for - and given that tape comes out to about ~110USD for 12TB, the price difference isn't as big as you'd expect. If the drives had some factory reset command so you could tell it "everything on this is useless, pretend you never wrote anything at all", they would be so so much more useful
|
# ¿ Aug 10, 2021 23:12 |
|
Why does a duplicate picture finder need to run in a docker....?
|
# ¿ Sep 15, 2021 00:16 |
|
Legends tell of a forbidden spell, "unlink(2)", but no one has used such power in millennia...
|
# ¿ Sep 18, 2021 19:22 |
|
BlankSystemDaemon posted:I'm not sure I've ever been responsible for the name of a thread before. AFAIK the ZIL always exists, it's just that if no SLOG device is provided it is stored on the pool with everything else.

Twerk from Home posted:If I wanted a fast NAS and was willing to splash for a couple terabytes of all flash, what's a sane way to do that? Whether you want redundancy, or just a striped/spanned config, really just comes down to: how painful is restoring in your backup scheme?

Redundant option: Since you want fast, and only "a couple terabytes", IMO a striped ZFS mirror with 4x 1 TB NVMe drives is the way to go. Meaning, 2 mirror vdevs, each with 2 of the drives. And you've got no need for a SLOG then even if running a high-speed database, because your pool is already fast.

Non-redundant option: Buy 2x 1 TB NVMe drives, stripe them in ZFS, and just rely on restoring from backup. In this scenario your first-tier "backup" can even just be 2x spinning rust drives in a mirror within the same machine, on a second pool you never use directly. Then a cron job backs up the SSD pool to the HDD pool periodically using ZFS send/recv.

Obviously in either scenario you have more backups like an offsite one or w/e depending on how valuable this data is.
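Roughly what those two layouts look like as zpool commands - pool and device names here are placeholders (use /dev/disk/by-id paths on a real box), and the two `fast` pools are alternatives, not both at once:

```shell
# redundant option: 2 mirror vdevs of 2 NVMe drives each = striped mirrors,
# ~2 TB usable out of 4 TB raw
zpool create fast \
    mirror nvme0n1 nvme1n1 \
    mirror nvme2n1 nvme3n1

# non-redundant option: plain 2-drive stripe, plus a spinning-rust mirror
# in the same box as the first-tier backup target for zfs send/recv
zpool create fast nvme0n1 nvme1n1
zpool create slow mirror sda sdb
```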
|
# ¿ Sep 21, 2021 08:17 |
|
Z the IVth posted:Embarrassing confession time. its ok, windows just be like that sometimes
|
# ¿ Oct 4, 2021 04:10 |
|
politicorific posted:big post Various thoughts in no particular order:
- im pretty sure you cant install normal 120mm fans in a 3U case, so youre going to be stuck with LOUD fans. a 4U is probably better from that perspective
- water cooling unnecessary assuming you have enough airflow going through this case
- UPS is a must have
- transfers between VMs over the hypervisor's network bridge are pretty much at RAM speed
- if your gaming machine is a VM on this server you'll have to look at GPU passthrough. doable but takes effort
- you'll definitely want all the VMs to be installed on SSD-based storage (especially the VM you use as a PC), but hard drives will be fine for bulk data
- RAID is not a backup, it exists purely to improve availability (uptime) so plan to have a proper backup for all this stuff
- consider having your main PC as a dedicated desktop anyway so you can still check your email and pay your bills online whenever this homelab inevitably explodes, figuratively or literally
- consider going AMD and getting a 5950X
- personally I would rather do all this on TrueNAS than unraid but that will take away your GPU passthrough option. there's also Proxmox as an option
- do NOT buy a single DIMM of RAM, buy a kit that populates your channels (which is 2 on most platforms)
|
# ¿ Oct 4, 2021 05:48 |
|
really weird rear end question but i figure this is also the unofficial data hoarder thread - anyone got tribute.avi?
|
# ¿ Oct 30, 2021 01:40 |
|
ive never used bsd but i also enjoy the bsd-posting
|
# ¿ Nov 18, 2021 02:28 |
|
BlankSystemDaemon posted:A new feature named vdev properties just landed in OpenZFS. Could you in theory write a script to rebalance a zfs pool by disabling allocations on the more full vdevs and then cp-ing a bunch of files around until the vdevs are mostly balanced?
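In principle something like this sketch should do it - pool, vdev, and file names are all made up, and the allocation property name here is from memory, so check the zpoolprops/vdev-properties man pages before trusting any of it:

```shell
# sketch: freeze allocations on the full vdev, rewrite data so new blocks
# land on the emptier vdevs (property name from memory - verify first!)
zpool set allocating=off tank mirror-0

# rewriting a file re-allocates its blocks under the current rules,
# so a copy + rename pass migrates its data off the frozen vdev
cp /tank/data/bigfile /tank/data/bigfile.tmp && \
    mv /tank/data/bigfile.tmp /tank/data/bigfile

# re-enable allocations once `zpool list -v tank` looks balanced
zpool set allocating=on tank mirror-0
```

One gotcha: snapshots keep the old blocks referenced, so the space on the full vdev only actually frees up once those snapshots are destroyed.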
|
# ¿ Dec 1, 2021 01:27 |
|
Re: lack of apps on truenas - can't you install whatever you want into a jail?
|
# ¿ Jan 9, 2022 23:26 |
|
5436 posted:That a good price? Maybe you can swing a discount if you buy all 17?
|
# ¿ Jan 13, 2022 03:31 |
|
Arivia posted:
It's extremely funny to me that there's a class of "tech" YouTubers, in comparison to whom Linus seems like a professional. Like what the heck do those other people do
|
# ¿ Jan 30, 2022 03:40 |
|
It's pretty normal to code a lookup table or a blob like that, I don't think it's a smoking gun or anything. You just generate the .c file with an external script. Also I'd rather have that in a .c file than a .h file, so you aren't relying on the linker to clean up multiple definitions.
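A minimal version of that generate-a-.c-file build step, sketched in shell - the table contents (squares) are just a stand-in for whatever blob the real project embeds:

```shell
# emit lut.c from a script at build time; check the output in, or
# regenerate it as a Makefile step - either way nobody hand-edits it
{
    echo '/* generated by gen_lut.sh - do not edit */'
    echo '#include <stdint.h>'
    echo ''
    echo 'const uint16_t square_lut[16] = {'
    for i in $(seq 0 15); do
        printf '    %4d,  /* %2d^2 */\n' $((i * i)) "$i"
    done
    echo '};'
} > lut.c
```

The matching header then carries only `extern const uint16_t square_lut[16];` - the definition-in-.c, declaration-in-.h split the post is getting at, so the linker never sees duplicate definitions.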
|
# ¿ Feb 19, 2022 23:17 |
|
Yeah the difference is that if you were using ext4 you still would have ended up with hundreds of GB of bad data on disk but you wouldn't even have the console logs, just the random crash. Also, can't zfs be made to send an email or something when it detects a bad block? I'm pretty sure truenas has a feature like that
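The piece that does the emailing is ZED, the ZFS Event Daemon - on most Linux setups it sources /etc/zfs/zed.d/zed.rc as shell. A minimal fragment (address is a placeholder):

```shell
# /etc/zfs/zed.d/zed.rc - zed reads this on startup
ZED_EMAIL_ADDR="you@example.com"    # where checksum/io error notifications go
ZED_EMAIL_PROG="mail"               # needs a working MTA on the box
ZED_NOTIFY_INTERVAL_SECS=3600       # rate-limit repeated notifications
```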
|
# ¿ Jun 14, 2022 21:45 |
|
We live in a strange time. You make more money by making products worse.
|
# ¿ Jul 14, 2022 11:40 |
|
Jellyfin has a bunch of apps too though? https://jellyfin.org/clients/ It's a Plex clone so I don't see why it would be any harder for non-technical people to use.
|
# ¿ Jul 18, 2022 20:23 |
|
If you use striped mirrors you can expand your pool easily and you can even use different size drives as long as each mirrored pair is matched. It's very convenient
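For example, growing a striped-mirror pool is just this (pool and device names are hypothetical):

```shell
# add a third mirror vdev; the new pair must match each other,
# but not the existing vdevs
zpool add tank mirror sde sdf

# or grow one mirror in place: replace its disks with bigger ones
# one at a time, resilvering between, and the vdev expands when done
zpool set autoexpand=on tank
zpool replace tank sda sdg
```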
|
# ¿ Aug 18, 2022 22:16 |
|
BlankSystemDaemon posted:It also means if two disks in a mirror fail, you lose your entire pool. You should have backups!
|
# ¿ Aug 18, 2022 22:34 |
|
BlankSystemDaemon posted:The ironic part is that the first time they tried to opensource Solaris was back in the 90s, but they couldn't because there was a shitload of drivers in Solaris written by second-party companies or subcontractors who they couldn't source release forms from. nit: Pandora is the one who opens the box, she isn't in it
|
# ¿ Aug 19, 2022 18:03 |
|
Thanks Ants posted:If you have guest Wi-Fi at work then get a cheapish tablet and use that to watch your Plex stuff. Doing VPNs on your work PC is just going to give them excuses if they decide they want you gone one day. I think it would get them immediately fired or at least in hot water. To the infosec people won't it look like an employee doing industrial espionage or something?
|
# ¿ Oct 8, 2022 17:05 |
|
This whole space is just begging for someone to create a convenient and easy to use wireguard endpoint in a box
|
# ¿ Nov 18, 2022 01:08 |
|
I'd probably stick all the online-facing services in a VM or something. I'm pretty sure Docker does not provide any security guarantees
|
# ¿ Dec 14, 2022 19:57 |
|
Wild EEPROM posted:I have truenas running on an old dell workstation with an E3 v3 xeon. it has 2 pools, each with 1 pair of hdds (2x 14tb mirror, 2x 8tb mirror) SSDs for a special vdev should help with the navigation part. I believe there's a setting to allow small files to live on it but by default it only holds metadata. Nothing will really help with opening a bunch of videos one by one though
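The setting I was thinking of is special_small_blocks - a sketch, with pool/dataset names made up:

```shell
# send data blocks <= 32K from this dataset to the special vdev (the SSDs)
# in addition to metadata; the default of 0 means metadata only
zfs set special_small_blocks=32K tank/media

# confirm it took
zfs get special_small_blocks tank/media
```

Note it only applies to newly written blocks, so existing small files stay on the spinning rust until rewritten.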
|
# ¿ Jan 3, 2023 07:20 |
|
IOwnCalculus posted:Once you get to "servers sold to businesses", interoperability standards with things like power supplies go right back out the window. HPE doesn't care that you can't swap that PSU with a generic one, their entire concern for that server is that you either buy Official Spare HPE parts to repair it, or replace it with a More Better HPE Server when things do start breaking. I think system vendors would love to go back to the days where everybody had their own ISA and their own flavor of Unix, but that business model is not viable so instead we get almost-but-not-quite interchangeable commodity hardware
|
# ¿ Jan 19, 2023 20:55 |
|
BlankSystemDaemon posted:It's not exactly difficult to setup headscale which lets you self-host a beacon for tailscale. I feel like we're getting close to the point where it'll be possible to buy a cheap device that runs a wireguard endpoint and send it to your non-technical friends and family, tell them to plug it in, and it automatically bridges to your network and everything is super easy
|
# ¿ Feb 1, 2023 09:00 |
|
IOwnCalculus posted:It's this. If you're really that worried about the attack surface of Plex into your home network, set up Plex on its own VLAN, which is locked down to only access the internet and read-only access to your NAS. sure but your trust for nginx or wireguard should be about 100 times higher than plex
|
# ¿ Feb 4, 2023 04:04 |
|
Klyith posted:Yeah btrfs is gonna be vastly easier for someone doing a "linux learning project", since it's a first-class citizen on linux. Depending on what distro and what install options, it may well be the out-of-the-box default. As compared to btrfs, probably not. Compared to something like md it does
|
# ¿ Feb 10, 2023 20:30 |
|
Pablo Bluth posted:This ArsTechnica article was pretty scathing about btrfs Lol that's worse than I thought. That's such terrible UX
|
# ¿ Feb 10, 2023 23:26 |
|
With any sort of journaling filesystem a hard power off is unlikely to corrupt or lose your data. And with something like ZFS I think it would be fair to say it's impossible. Anything actually on the disk will be perfectly safe. However your OS will probably buffer a few seconds' worth of data in system RAM, and some SSDs will do that again in their own RAM. So there's a risk of losing data there, if for example you had just edited and saved a file and the SSD was still writing the new blocks.
|
# ¿ Feb 24, 2023 17:21 |
|
The word is write-back cache
|
# ¿ Mar 21, 2023 17:50 |
|
Jim Silly-Balls posted:Its a 10GB SPF Direct cable Just to be extremely clear: gigabits or gigabytes?
|
# ¿ Mar 31, 2023 17:43 |
|
|
I thought each vdev only gets the bandwidth of its slowest drive? I was going to recommend ZFS with striped mirrors. You'll only get half the space but if you really want to saturate the network it might be worth it
|
# ¿ Mar 31, 2023 19:44 |