|
Brain Issues posted:I want a separate computer for the server, as I'd like the HTPC to run windows but I'm not fond of using windows for a file server (was going to use either Ubuntu or FreeNAS). That case is nice though. edit: My mistake, I misread this post. Yaoi Gagarin fucked around with this message at 10:57 on Jul 29, 2014 |
# ¿ Jul 29, 2014 02:49 |
|
Brain Issues posted:This is what I decided I'm going to do first. Unfortunately the extra motherboard I have is ATX so I can't make it a small form factor PC, but I can deal with it considering it'd save me around 400 dollars if I don't buy a qnap/synology 4 bay. Just need to buy a cheap SSD now so I can swap the disk out of the HTPC. If it's ATX you need a mid-tower anyway, and you only want four bays, so maybe get the Nanoxia DS1? It'll be quiet, it's a hundred bucks, and it's still a good case if you end up not using it for NAS.
|
# ¿ Jul 29, 2014 11:00 |
|
Got a question about freenas. I've read a few articles that suggest that using striped mirrors is actually safer in zfs than raidz2, because the rebuild time is a lot shorter. Like if you lose a drive in raidz2 the whole array is going to be worked over but if you lose a drive in a striped mirror it's only the partner drive. Looking for the goon opinion on this. Would it make sense to start off with a 4tb mirror pool and slowly expand with more mirrors as needed, or should I save up for a full 6 drive raidz2?
|
# ¿ Apr 15, 2016 00:38 |
|
G-Prime posted:Yes, striped mirrors are inherently safer. You're also getting less than 50% of the total capacity of the drives you're running. At 6 drives (the minimum for striped mirrors, because you need two drives plus a third for parity, and then a second set to match), you're losing 4 drives to redundancy (your parity drive on one side of the mirror, plus the entire other side). You can tolerate the loss of one entire side of the mirror without issue, plus one drive of the other side. Whereas a z2 is losing 2 drives to it. Less fault tolerance, more capacity. It's a matter of priorities. And if you care about read IOPS, which would lean toward the striped mirrors as well, because they should be a fair bit faster. I think we have a different understanding of striped mirrors? Sounds like you're suggesting two raidz1 vdevs in a single pool? My plan was to start with a single vdev pool. The vdev would be two drives in mirror configuration. Then if I needed to add more space later I would just add more vdevs. That's theoretically less safe than a raidz2 vdev. But supposedly the idea is that with raidz2 the rebuild process could take out two more drives, since it takes so long. Whereas with mirrors the rebuild is a simple copy. Does that make sense? Am I overestimating how dangerous raidz2 rebuilds are? Also I don't care about IOPS at all, as long as the array can support watching videos. E: here is the article I'm basing this on. http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/ I'd appreciate insight re: whether that article is full of poo poo or not Yaoi Gagarin fucked around with this message at 20:01 on Apr 15, 2016 |
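To make the rebuild comparison concrete, here's roughly what swapping a dead drive in a mirror pool looks like (pool and device names are made up, this is just a sketch):

code:
# 'tank' and the daN names are placeholders; da1 is the failed half of one mirror
zpool replace tank da1 da4
zpool status tank    # the resilver only reads from da0, the surviving partner

versus a raidz2, where the resilver has to read from every remaining drive in the vdev.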
# ¿ Apr 15, 2016 19:59 |
|
g0del posted:I wouldn't say it's full of poo poo, but it's definitely not talking to you. His arguments basically come down to "You can totally afford to buy 2 hard drives for every one you use" and "You'll need mirrored drives to wring all the IOPS possible out of your array, especially when resilvering". The second definitely doesn't apply to you, and most home users can't afford to throw money at redundancy the way businesses do, so the first probably doesn't apply either. Desuwa posted:To be fair OpenZFS rebuilding of raidz1-3 is slower than it should be, though it doesn't have anything to do with parity calculations. The algorithm they use for deciding the order to write blocks actually results in a large amount of small and effectively random reads and writes. In a pretty recent version of closed source ZFS they improved the algorithm significantly. Guess I'll go with a raidz2 then, and just save up until I can swing four or six drives at once.
|
# ¿ Apr 17, 2016 06:52 |
|
KOTEX GOD OF BLOOD posted:If I can pick between the drive that has a 3 in 4,000 chance of catching on fire and burning down the house, and the one that doesn't, I think I am going to go with Option 2. It's not 3/4,000. Think of it like this: if you bought a hard drive and it caught fire, you would almost certainly write a review about it. On the other hand, how many hard drives have you owned that you've been perfectly content with, but have never left a positive review for? The actual number of people who bought that drive from Amazon is going to be more than 3, but not by much. Conversely, the number of people who bought that drive from Amazon and didn't experience a fire is going to be much, much, much higher than 3,997.
|
# ¿ May 11, 2017 04:08 |
|
DrDork posted:And how many Amazon reviews did you leave? Probably zero. Maybe he didn't buy the drives from Amazon. And maybe that matters.
|
# ¿ May 11, 2017 22:11 |
|
Maybe you can make a symbolic link to a share instead of mapping the drive?
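Assuming the client is Windows (mapped drives usually are), a directory symlink pointed straight at the UNC path might do it. Paths here are totally made up:

code:
rem run from an elevated command prompt; link name and share are placeholders
mklink /D C:\NASShare \\mynas\share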
|
# ¿ Aug 25, 2017 02:37 |
|
Eletriarnation posted:It's hard to beat the Node 804's ten 3.5" bays at the same cost or size, let alone both. I'm using an old full tower case that I don't want to throw away or use anywhere else, but I bought a microATX board for it partly so I'd have the option to use an 804 if I ever want to add more drives than would fit right now. As someone using an 804 for their primary PC right now, I find the more cube-ish shape really annoying compared to something narrower. It takes up a greater footprint under my desk, so I'd rather have a full tower.
|
# ¿ Sep 3, 2018 19:46 |
|
forbidden dialectics posted:Lian-li A75. Kind of a disappointing case from Lian-li in terms of quality/finish, but it was quite cheap and is literally the only tower case I've found with 12 bays. I think one of the NZXT cases has 14. The H400, maybe?
|
# ¿ Nov 1, 2018 02:31 |
|
Has anyone here tried Stablebit DrivePool with an all-SSD pool? Thinking of pooling 3-4 SSDs together in my next gaming PC so that I have a giant amount of fast storage for Steam games and VMs. I just want to make sure DrivePool doesn't add a ton of overhead or anything that would make this a bad idea
|
# ¿ Nov 4, 2018 09:11 |
|
What would be really cool is building/scavenging a 19U rack and turning the whole thing into a giant open air server. It'd be like having your own mainframe, except a million times stupider
|
# ¿ Mar 30, 2019 02:43 |
|
xzzy posted:But then you'd realize that you're wasting cooling by cooling down the hot exhaust air. What if you just duct yourself into an insulated bubble with AC, that'd be easier to cool
|
# ¿ Mar 30, 2019 03:06 |
|
To be clear: the ZIL is not a general-purpose write cache; it's used only for synchronous writes to the disk. Regular (async) writes never hit the ZIL at all. You always have a ZIL, but if there's no dedicated device it resides on the pool.
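If you do want a dedicated device for it (a SLOG), it's a one-liner, but it still only helps sync writes. Pool, device, and dataset names here are placeholders:

code:
# mirror the log device if you care about sync writes surviving a SLOG failure
zpool add tank log nvd0
zfs get sync tank/mydataset    # sync=standard means only O_SYNC/fsync writes touch the ZIL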
|
# ¿ Sep 10, 2019 22:43 |
|
How does the NAS get the key? Does it store it somewhere or do you have to type it in every time your computer makes a backup? Or when the NAS boots? Depending on exactly what you're worried about one of these options might be better from a security perspective
|
# ¿ Feb 7, 2020 02:27 |
|
Are there any specific brands of SATA cable I should trust more than others? For example, crappy DisplayPort cables sometimes lead to all sorts of mysterious erratic behavior. I would like to avoid that problem when attaching a disk.
|
# ¿ Mar 10, 2020 01:16 |
|
Thermopyle posted:Excellent idea. Archive and properly back up the actual NZB files used to get all your stuff too. That will make recovery extra easy, and if you compress NZBs they should take barely any space at all
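NZBs are just XML, so they squash down really well. Something along these lines (path is made up) is all it takes:

code:
# hypothetical path to wherever your NZBs live
tar -czf nzb-backup.tar.gz ~/nzbs/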
|
# ¿ Mar 16, 2020 23:12 |
|
If you want to slowly expand a ZFS pool over time you can do it with striped mirrors (like RAID10). Start with two drives, then add two more later, and so on. The downside is your capacity is only 50% at all times, but you get better performance and more flexibility in exchange. You can even use different sized drives as long as the two drives in each mirrored pair are the same size as each other
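In ZFS terms the whole thing is just this (device names are placeholders; each zpool add stripes another mirror vdev into the pool):

code:
# start with one mirrored pair, e.g. two 4TB disks
zpool create tank mirror ada0 ada1
# later, add another pair; it can be a different size, e.g. two 8TB disks
zpool add tank mirror ada2 ada3
# shows each mirror vdev and its capacity
zpool list -v tank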
|
# ¿ Mar 18, 2020 23:32 |
|
HalloKitty posted:Edit: Oh, and as an aside, I found this out in a reddit thread (I know) about WD Red drives between 2TB and 6TB being SMR. Avoid like the plague. Wow that's shady as gently caress. SMR is an affront to God. I guess they want people to pay up for more expensive drives to get CMR?
|
# ¿ Apr 15, 2020 10:03 |
|
On Linux, if you really really really want lots of random bytes with no contention with other processes, the way to do it is to use /dev/urandom to generate a seed value and feed that into your own PRNG. For a one-liner you can use openssl to run AES on /dev/zero with the /dev/urandom bytes as the key
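Going from memory, the one-liner is something like this. Double-check the flags against your openssl version, and the output path is a placeholder for wherever you need the bytes:

code:
# seed AES-256-CTR once from /dev/urandom, then let it churn out the random stream
openssl enc -aes-256-ctr -nosalt \
  -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" \
  < /dev/zero > /path/to/output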
|
# ¿ Apr 17, 2020 22:37 |
|
https://arstechnica.com/information-technology/2020/04/seagate-says-network-attached-storage-and-smr-dont-mix/ Seagate at least promises that their ironwolf drives will stay CMR.
|
# ¿ Apr 21, 2020 22:34 |
|
IOwnCalculus posted:Ironwolves are the only drive I've ever had 100% failure rates with but that alone would be enough for me to try them again assuming cheaper sources dry up. I looked at the most recent backblaze data and whatever Seagate drives they use only have a slightly higher failure rate vs HGST and Toshiba. But I don't know if those are ironwolfs or not E: ooh, even better is that their datasheets explicitly call out CMR: https://www.seagate.com/internal-hard-drives/hdd/ironwolf/ Every single ironwolf from 16 TB down to 1 has it Yaoi Gagarin fucked around with this message at 23:20 on Apr 21, 2020 |
# ¿ Apr 21, 2020 23:15 |
|
Paul MaudDib posted:I don't get the angle of picking on WD because they happened to be the first ones discovered doing this, since all the other brands have now confessed they're doing it too. You're going to punish WD by... taking your business to another brand that did the exact same thing as them? Well tbf, like I posted earlier Seagate at least has CMR right on the datasheet for their Ironwolf drives. So there's different levels of poo poo going on.
|
# ¿ Apr 24, 2020 02:38 |
|
dutchbstrd posted:A drive failed in my nas so I swapped it out and rebuilt the array. Things are all good now. Should I do anything to the old drive before I throw it out? I think the data on it is effectively useless since it was just one of four disks in raid5? I'd at least write over it completely with zeroes or something before recycling it.
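A single pass of zeroes from any Linux live environment is plenty for a home drive. sdX is a placeholder, so triple-check which disk it is before running anything like this:

code:
# overwrite the entire disk with zeroes
dd if=/dev/zero of=/dev/sdX bs=1M status=progress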
|
# ¿ Apr 28, 2020 00:08 |
|
IOwnCalculus posted:I always rip mine apart to get the magnets out of them because, hey, strong as gently caress magnets. Also pretty much guarantees nobody is going to read anything off of them. Well if you're going to do that then you may as well turn the platters into coasters too
|
# ¿ Apr 28, 2020 00:13 |
|
Maybe when bcachefs gets mainlined Linux will finally have a Good Native Filesystem™
|
# ¿ Apr 28, 2020 22:08 |
|
D. Ebdrup posted:
That's my hope too. Also, since bcache is a fairly popular program to begin with and bcachefs is just a POSIX API over that same storage layer, I think people won't approach it with the same apprehension as btrfs. It reminds me of a talk I saw once where the presenter had tried to build a filesystem on top of an RDBMS. So like tables for inodes, directories, etc. It was super slow, but it did work.
|
# ¿ Apr 28, 2020 22:34 |
|
That Works posted:I just shoot it with a rifle. If I was even more worried than that I'd shoot it with a rifle and throw it in a deep lake in the middle of the woods. Please do not toss electronics into lakes, they have toxic metals in them
|
# ¿ May 1, 2020 00:49 |
|
Why does anyone even use hardware RAID nowadays? Hasn't software RAID been better for like a decade now?
|
# ¿ May 2, 2020 08:20 |
|
D. Ebdrup posted:Good news, everyone! Funny you post this now, I was literally just reading up on allocation classes a few minutes ago, after I heard that the next freenas (which will be named truenas core) would have some kind of "fusion pool" feature. And yeah it's really strange that this feature hasn't gotten more exposure. Given how many people on youtube and other places I see trying to add slog devices thinking that they are a general purpose write cache, I would expect people to jump all over this as a magic IOPS booster. Anyway, maybe having metadata on an nvme SSD will make `find` blazing fast? It would be worth it for that alone
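For reference, on plain OpenZFS the allocation class stuff boils down to this (names are placeholders, and the special vdev should be mirrored because losing it loses the whole pool):

code:
# metadata lands on the fast mirror instead of the spinning rust
zpool add tank special mirror nvd0 nvd1
# optionally also push small blocks onto the special vdev
zfs set special_small_blocks=64K tank/mydataset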
|
# ¿ May 26, 2020 10:10 |
|
Henrik Zetterberg posted:https://arstechnica.com/gadgets/2020/05/western-digital-gets-sued-for-sneaking-smr-disks-into-its-nas-channel/ Good loving riddance. I hope they get a big penalty
|
# ¿ May 30, 2020 21:45 |
|
D. Ebdrup posted:Doing full submersion cooling isn't nearly as easy as it sounds, because you need to ensure that there is NO rubber (normally used to reduce vibrations) as for example demineralised water will break up rubber. You also need to ensure that all screws and other bits of useful pieces of metal like the heatsink you use won't leech ions into the water, slowly making it conductive. There's no way anyone would do submerged cooling with DI water; almost all metals will eventually dissolve ions into it. You have to use oil or some other nonpolar fluid.
|
# ¿ Jun 5, 2020 00:23 |
|
IOwnCalculus posted:I have seen spec sheets for some absolutely strange cooling systems, mostly to try and make "not a datacenter" space into a datacenter. At the place I last worked, the way we set up our servers and test hardware was basically: take a room, put in high current outlets, put in some AC, add racks/shelves. Literally there's a loving room with plate glass windows that don't insulate for poo poo and 3 portable AC units for cooling. In southern California. It hosts loving compile farm blade servers. That room is hot. Also, the servers are on UPS but the AC units are not, so whenever there's a power outage (and by God are there power outages) my coworkers would scramble to hard unplug all the blade servers before they burned the building down. Management knew this was a problem, but didn't care.
|
# ¿ Jun 5, 2020 04:43 |
|
D. Ebdrup posted:The 'special' vdev already does checksumming, compression, and caching on its own, which is why I was thinking it would be smart to not go through those codepaths twice. Does send | receive onto the same pool actually work? That seems pretty crazy
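i.e. something like this, with made-up dataset names, where both ends of the pipe are the same pool:

code:
zfs snapshot tank/olddata@migrate
zfs send tank/olddata@migrate | zfs receive tank/newdata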
|
# ¿ Jun 15, 2020 20:07 |
|
D. Ebdrup posted:I'm having a hard time answering this, because it seems like a pretty fundamental misconception is going on here. Well what happens to the old blocks when the copies are written to the pool? Do I now have 2x everything in the pool?
|
# ¿ Jun 16, 2020 06:56 |
|
Not Wolverine posted:I have not yet read your link (I plan to) but my specific fear is that ZFS on Linux might be replaced or moved into the kernel in the future. Even if it's "better" I don't want things to break. It's an irrational fear, but that's the main reason I don't want to use Z on Linux right now. You know the on-disk format won't change right? If the module is replaced or put into the kernel tree the new one will still import your pool just fine.
|
# ¿ Dec 12, 2020 18:09 |
|
shortspecialbus posted:Ubiquiti has a lot going for it, but they make a lot of dumb decisions, have firmwares that break things get released (although they've been better), and their cameras are a joke. Plus the aforementioned BS with not updating the controller software to support OS's that aren't end-of-lifed. If I didn't already have 3 access points that worked really well I'd probably be looking at alternatives myself. What's wrong with their cameras?
|
# ¿ Dec 20, 2020 00:52 |
|
BlankSystemDaemon posted:Assuming FreeBSD 12 (or TrueNAS, I suppose, since that's also version 12), I believe there's the option of doing per-dataset encryption using AES-256-GCM with the OpenZFS port (which I believe is what TrueNAS 12 implements) - that would give you the equivalent of shared folder encryption on Synology which are encrypted when not mounted. Can you run TrueNAS with encrypted swap (or no swap at all) though? They specifically mentioned swap leak as what they're worried about. E: oh wait, the concern is specifically being able to throw away drives. TrueNAS should be fine then because no system info is stored on your pool's disks. You ZFS encrypt your pool's root dataset and you're safe. You just can't easily toss the system drive because that's what might have swap on it, unless you enable FDE for that one (or two if mirrored) drive Yaoi Gagarin fucked around with this message at 17:18 on Jan 1, 2021 |
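For reference, the native per-dataset encryption being talked about is roughly this on the command line (dataset name is made up; on recent OpenZFS, encryption=on means aes-256-gcm):

code:
# prompts for the passphrase when the key is loaded (zfs load-key)
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secure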
# ¿ Jan 1, 2021 17:13 |
|
BlankSystemDaemon posted:You can absolutely encrypt swap on FreeBSD, but I know basically nothing about TrueNAS. In the TrueNAS case I believe dataset encryption can be a reasonable alternative to FDE, specifically for pool disks. Since TrueNAS only uses those disks for ZFS datasets, as long as you haven't manually put anything on a separate partition, everything on a given disk is encrypted. Someone who possesses the drives can distinguish zero and nonzero blocks but that's all. E: it says as much in the docs: https://www.truenas.com/docs/hub/initial-setup/storage/encryption/ (scroll to the picture of the warning dialog) E2: swap is also good because it lets the OS trade rarely used memory pages for frequently used pages from the file cache. Though idk if FreeBSD does that Yaoi Gagarin fucked around with this message at 18:08 on Jan 1, 2021 |
# ¿ Jan 1, 2021 17:56 |
|
BlankSystemDaemon posted:That's the point of any paging, and has been a thing since before any of the modern OS' or their ancestors existed (it was first implemented in 1963). No, that is not the point of paging; it's a particular optimization only possible with paging + swap. Without swap, any allocated page, no matter how stale, must be backed by physical memory. If I allocate a 1GB buffer, write a byte to each page to force it to be allocated, and then never touch that buffer again while my program does other stuff for an hour, that entire time I'm wasting physical memory capacity. With swap the OS has somewhere to stash these rarely used pages. Because of this, swap can provide a benefit even when the working set is smaller than physical memory capacity: pushing rarely used pages to disk leaves more space for the file cache.
|
# ¿ Jan 2, 2021 07:18 |