|
Pryor on Fire posted:I love the way you old fucks treat computing like a battle scene in Star Trek. Doesn't matter how mundane the activity or device or system, you find a way to turn something easy into a soap opera for absolutely no reason. Lock phasers on "ZFS", fire when ready.
|
# ? Feb 5, 2017 00:07 |
|
Paul MaudDib posted:Lock phasers on "
|
# ? Feb 5, 2017 02:26 |
|
Pryor on Fire posted:I love the way you old fucks treat computing like a battle scene in Star Trek. Doesn't matter how mundane the activity or device or system, you find a way to turn something easy into a soap opera for absolutely no reason. As an aside... isn't it also possible that some view the online interactions you consider a soap opera as just casual conversation? I'm always surprised when people consider some online discussion or other to be an intense battle when I view it as just a super minor disagreement.
|
# ? Feb 5, 2017 02:49 |
|
Thermopyle posted:As an aside... I grew out of it as a preteen.
|
# ? Feb 5, 2017 03:06 |
|
NO, YOUR OPINION IS BAD AND YOU ARE BAD TOO I've run probably every variation on this except for jails - they look obnoxious enough that I've never tried. I'm definitely looking forward to Docker on FreeNAS 10.
|
# ? Feb 5, 2017 03:10 |
|
IOwnCalculus posted:NO, YOUR OPINION IS BAD AND YOU ARE BAD TOO Techgoons.txt
|
# ? Feb 5, 2017 03:45 |
|
Would something like a Synology DS216j be good enough for home NAS and making sure multiple OSs (Windows and Linux) can access the drive?
|
# ? Feb 5, 2017 08:27 |
The thing about containers is that there are a lot of different implementations, most of which don't share goals and features, but here's the short of it as I understand it:
1) Jails, as you might guess from the name of the original paper in which they were described, exist to isolate and confine root (and, as a result, everything else). This makes them very good for security: the few known methods of jailbreaking generally involve sockets, modifying the shell executable that a bad superuser will jail_attach to, or using symlinks, and all of these are easily addressed by not allowing socket access (which is the default), sshing into the jail instead of using jail_attach, and using ZFS datasets. In fact, jails have a pretty good track record for security, with PHK having quipped that he's very interested in hearing from people who manage to jailbreak through other means, because he hasn't heard of any yet, though he doesn't believe they're completely secure.
2) Zones are not as completely isolated as jails (though they're very much inspired by them), and are often used to provide cross-platform compatibility through branded zones, implementing various system call stacks so that you get L(inu)x zones, cluster zones (which can be used by Manta and Triton), and so forth. Something else zones did first was multiple network stacks (via Crossbow, which in turn inspired VIMAGE on FreeBSD).
3) Docker exists to let developers control the environment their programs run in, with shared libraries and configuration managed from the outside, which makes updating them very simple, but also means that what security it offers is more of an afterthought than a design goal. In exchange, it's excellent for cross-platform portability.
Containers also have the excellent feature that they can be run on bare metal instead of in a VM, which means they run faster and don't segment resources (something I mentioned in my earlier post), but a certain subset of people have decided (probably through not knowing better, rather than as an act of defiance) that they must be run in VMs, with all the downsides that implies. It's also worth mentioning that jails and zones predate Docker (jails were made pre-2000, zones ~2004 if I recall correctly, whereas Docker is from 2013, after it was re-branded from a failed cloud/PaaS initiative), but before that not a lot of people had really considered using containers as a way to empower developers; it was more of a tool in a sysadmin's toolbox. Both jails and zones can do what Docker does (although it requires a bit of tooling to enable it), but Docker did it first and did it so well that it changed the entire landscape. monster on a stick posted:Would something like a Synology DS216j be good enough for home NAS and making sure multiple OSs (Windows and Linux) can access the drive? BlankSystemDaemon fucked around with this message at 13:16 on Feb 5, 2017 |
|
# ? Feb 5, 2017 12:50 |
|
IOwnCalculus posted:NO, YOUR OPINION IS BAD AND YOU ARE BAD TOO All computing sucks, some less than others. For serious: one thing I've come to admire in computing is the Quality Without A Name aspect of things. It's a term from architecture describing certain properties (qualities) of things that you can't quite put your finger on, but that make them wonderful. For me, ZFS oozes this; every time I use it, I get the feeling that whoever made it took great pleasure in making it, and it shines through in everything it does. Same with programming languages: F# has this, as does Erlang. The first iPhone, etc. I just don't have the tolerance for poo poo software/hardware anymore, it seems.
|
# ? Feb 5, 2017 14:07 |
|
In the case of hardware, quality raises the cost, something a lot of people balk at. Honestly, every person I know who complains about quirky behavior with their PC or laptop bought budget stuff. Of the few people who spent considerable money or did DIY builds with quality components, none have any issues at all, regardless of how good they are with computers.
|
# ? Feb 5, 2017 14:52 |
|
D. Ebdrup posted:Containers also have the excellent feature that they can be run on bare-metal instead of in a vm, which mean they run faster and don't segment resources (something I mentioned in my earlier post), but a certain subset of people have decided (probably through not knowing better, rather than as an act of defiance) that they must be run on vms with all the downsides that that implies. Docker came to prominence around the time that Vagrant showed up and addressed the very real issue of developers not wanting to be sysadmins, even as complicated application stacks were becoming the norm in industry. Developer-centric tools for systems management are rather rare if you look at the landscape of tools, but then, not as many developers in the 90s were concerned with bringing up really complicated stacks exactly as they are today.
|
# ? Feb 5, 2017 16:38 |
necrobobsledder posted:Most of the arguments I've seen for running containers inside VMs are based around greater isolation than what containers or jails offer, or because the applications running in the containers aren't quite 100% stateless and people want to be able to live-migrate all the workloads to another physical location, or want CPUs-in-lockstep fault tolerance. In these situations, cost and even latency are typically far secondary to reliability, and almost invariably whoever is running the show has concepts of software and systems reliability predating the mid-90s, with engineering practices to match (such as holding that continuous delivery and deployment are worse for reliability, despite the numerous successful companies showing the contrary). And speaking of Vagrant, there's currently work being done to enable Vagrant to use bhyve as a backend; the only known blocker, as far as I remember, is suspend/resume. BlankSystemDaemon fucked around with this message at 19:03 on Feb 5, 2017 |
|
# ? Feb 5, 2017 17:19 |
|
Combat Pretzel posted:In case of hardware, quality raises the cost, something a lot of people balk at. Honestly, every person I know, who complains about quirky behavior with their PCs/laptops, all bought budget stuff. The few persons that spend considerable money or did DIY with quality components, none of these have any issues at all, regardless of how good they're with computers or not. One thing that made me aware of this again is that hibernation works on my machine at home, while on the crappy HP machine at work it just craps out all the time. I have a quality Intel board and everything works as it should; you'd expect the same from an HP machine, but you'd be wrong. Don't get me started on the cheap plastic power button that stays depressed when you press it to turn the machine on, because it's a cheap piece of poo poo, and then turns the machine off again. Way to ruin your brand, HP.
|
# ? Feb 5, 2017 17:31 |
|
Mr Shiny Pants posted:All computing sucks, some less than others. You might say your OS is a piece of poo poo. I will gladly take the downside of ZFS (poor options for expansion, in a consumer / home user context) in exchange for the data protection. The only time I've lost significant data is when I hosed up trying to do a ZFS send/receive and somehow managed to wipe the array of any visible data in the process. Definitely going to practice that a few times with some dummy arrays before I try that again.
|
# ? Feb 5, 2017 17:49 |
Thermopyle posted:As an aside... I don't really see any battle going on, that's not the angle I was going for. I was just more commenting on how neat all this discussion is from the perspective of someone who hasn't had time to dive into filesystems or building packages on BSD in like 10 years. It's a tremendous waste of time for me nowadays, but I'm glad someone out there is still doing it.
|
|
# ? Feb 5, 2017 18:32 |
|
IOwnCalculus posted:You might say your OS is a piece of poo poo. It's a tradeoff, sure, but one the designers at least thought about and made clear.
|
# ? Feb 5, 2017 20:13 |
It's not even that pool expansion by adding disks to existing vdevs (other than mirrors) is impossible; it's just that doing so requires block pointer rewrite, which, as I understand it, is quite difficult (though it would also enable other features like pool defragmentation), as evidenced by some of Matt Ahrens' off-the-cuff remarks on it. There isn't exactly a huge pool of software developers talented enough to work on it, but one can hope that will change now that Solaris is basically on life support.
|
|
# ? Feb 5, 2017 23:34 |
|
Block pointer rewrite would really help in the production world, too. I've seen this a few times: a vdev gets full while also sustaining a heavy workload. Adding another vdev gives you more room, but doesn't do much to relieve the workload on the existing disks. Unrelated, I've found something... odd? going on with the boot SSD on my Linux box. It dumped a bunch of "WRITE FPDMA QUEUED" errors to dmesg, which I'd normally take as a sign of a trashed drive. However, it did this about twelve hours after boot, nearly two weeks ago, and I only just noticed the errors in dmesg. The system still seems perfectly stable for reads and writes on that drive. I should probably replace it anyway, since it's a five-year-old SanDisk 120GB that has apparently sustained over 5TB written in its lifetime, if the SMART data is accurate.
|
# ? Feb 6, 2017 03:59 |
|
I seem to recall getting that message periodically with a lovely SATA cable, might be worth swapping that first just to see where you stand.
|
# ? Feb 6, 2017 05:04 |
|
I want to replace a 2TB drive in my Synology NAS (2-bay, JBOD) with a 3TB drive. Can I just connect both drives to my PC, clone the data, plug in the new one, and tell the NAS to use the additional capacity?
|
# ? Feb 6, 2017 10:44 |
|
D. Ebdrup posted:It's not even that pool expansion through adding disks to existing vdevs (other than mirrors) is impossible, it's just that doing so requires block pointer rewrite, which - as I understand it - is quite difficult (though also enables other features like pool defragmentation), as evident by some of Matt Ahrens off-the-cuff remarks on it, but there isn't exactly a huge pool of software developers talented enough to work on it; one can hope that this will change now that Solaris is basically on life-support. So I've read this, and I don't get some of the problems mentioned. He's talking about the space accounting of a block, but that doesn't change when you move a block from one vdev to another, right? The only thing I think is really tricky is doing it on a live system without running out of space because of the translation table you need to keep. Maybe you could even use the dedupe logic for it, since it traverses the whole system and rearranges pointers already. So maybe you'd offline the whole shebang, run it offline, and keep the translation map on a secondary scratch drive. I understand it's probably very difficult, but some things seem weird. Mr Shiny Pants fucked around with this message at 13:26 on Feb 6, 2017 |
# ? Feb 6, 2017 13:20 |
|
The reason it's a problem is that ZFS was designed for business use, and no business can afford to offline their array for days to add more capacity with BPR. Hence his statement about changing your pants while running. The scenario that they care about is making it so you can do a live resize, and that's an incredibly difficult use case.
|
# ? Feb 6, 2017 13:52 |
|
G-Prime posted:The reason it's a problem is that ZFS was designed for business use, and no business can afford to offline their array for days to add more capacity with BPR. Hence his statement about changing your pants while running. The scenario that they care about is making it so you can do a live resize, and that's an incredibly difficult use case. True, but having the option, albeit offline, is better than no option, I would venture.
|
# ? Feb 6, 2017 16:48 |
|
I think the biggest problem is that blocks are pointed to from a lot of different places, and changing them all at once, atomically, is a big pain in the rear end to do live. Also, dedupe is loving terrible and memory-intensive.
|
# ? Feb 6, 2017 17:21 |
|
Combat Pretzel posted:I think the biggest problems is that blocks are being pointed to from a lot of different places, and to change them all at once atomically is a big pain in the rear end doing it live. The way blocks, slabs, metaslabs, and the various pointers within each one are constructed makes it a monumental pain in the dick to add BPR after the fact. Each of the various layers needs to be aware of what's going on, a ton of poo poo would have to be added to every file transaction, and making everything kosher would be very challenging. As an offline-only tool to unfuck datasets or to do things like transition RAID levels, I could see it, but most ZFS use cases are people who can afford to throw an extra shelf at the problem, do a ZFS send/receive, and blow out the datastore if things go wonky. Yes, dedupe is literally terrible; I tried it once on my machine and ended up having to copy all my data to a 4TB external and delete the entire pool to get the cancer to go away.
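To make the "pointers everywhere" problem concrete, here's a toy copy-on-write tree sketched in Python. Everything here (the names, the two-level layout) is hypothetical and much simpler than ZFS's real on-disk format, but it shows the core issue: a block pointer records both a child's address and its checksum, so relocating a block forces a rewrite of every block that references it, and in real ZFS the parent's changed contents cascade checksum updates all the way up to the uberblock.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:8]

storage = {}  # fake disk: address -> block

class Block:
    def __init__(self, data, children=()):
        self.data = data
        self.children = list(children)  # block pointers: (address, checksum)

def store(addr, block):
    storage[addr] = block
    return (addr, checksum(block.data))

# A two-level tree: one directory block pointing at two data blocks.
ptr_a = store(100, Block(b"file data A"))
ptr_b = store(200, Block(b"file data B"))
store(0, Block(b"directory", [ptr_a, ptr_b]))

def move_block(old_addr, new_addr):
    """Relocate a block and rewrite every pointer that referenced it.
    Returns how many referencing blocks had to be touched."""
    storage[new_addr] = storage.pop(old_addr)
    rewritten = 0
    for blk in storage.values():
        for i, (addr, cs) in enumerate(blk.children):
            if addr == old_addr:
                # The data didn't change, so the checksum is kept -- but the
                # *parent's* contents just changed, which in real ZFS means
                # the parent itself must be rewritten, and so on up the tree.
                blk.children[i] = (new_addr, cs)
                rewritten += 1
    return rewritten

print(move_block(100, 300))  # -> 1 (the directory block had to be rewritten)
```

With deeper trees the rewrite count grows with every level between the moved block and the root, which is why doing this atomically on a live pool is so painful.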
|
# ? Feb 6, 2017 19:26 |
|
ZFS's combination of prefetch, which uses linear and stride read detection to figure out how much to prefetch, combined with elevator sorting, should mitigate the issue of fragmentation for a lot of workloads. For those that do truly random IO, it doesn't matter two shits whether the data is fragmented or not. With OpenSolaris' default ZFS configuration, watching movies for instance, the filesystem started to prefetch in 200MB blocks pretty quickly, because it spotted continuous linear reading.
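As an illustration of that ramp-up, here's a toy sequential-read detector in Python. This is a sketch under my own assumptions, not ZFS's actual zfetch logic (which tracks multiple streams and strides): each consecutive sequential read doubles the prefetch window until it hits a cap.

```python
class Prefetcher:
    """Toy model: consecutive sequential reads double the prefetch
    window up to a cap; a random access collapses it back down."""

    def __init__(self, block_size=128 * 1024, max_prefetch=200 * 1024 * 1024):
        self.block_size = block_size
        self.max_prefetch = max_prefetch
        self.last_offset = None
        self.window = block_size

    def read(self, offset):
        if self.last_offset is not None and offset == self.last_offset + self.block_size:
            # Sequential hit: ramp the window up.
            self.window = min(self.window * 2, self.max_prefetch)
        else:
            # Non-sequential access: collapse back to a single block.
            self.window = self.block_size
        self.last_offset = offset
        return self.window  # bytes to prefetch after this read

p = Prefetcher()
for i in range(12):                # simulate watching a movie: purely linear reads
    size = p.read(i * 128 * 1024)
print(size // (1024 * 1024))       # -> 200 (window has hit the 200MB cap)
```

The point is just how fast exponential ramp-up gets there: a dozen linear reads is already enough to saturate a 200MB window, which matches the "pretty quickly" behavior described above.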
|
# ? Feb 6, 2017 19:32 |
|
Combat Pretzel posted:I think the biggest problems is that blocks are being pointed to from a lot of different places, and to change them all at once atomically is a big pain in the rear end doing it live. Well, my remark was more that dedupe already works on all the blocks and has to know where every block is, otherwise it could never detect duplicate blocks and keep the filesystems kosher. Maybe they could leverage the same subsystem to figure out the necessary pointers when stuffing blocks into a new position. NetApp has dedupe scheduled; maybe something like a scheduled job could do all this when the filesystem is idle.
|
# ? Feb 6, 2017 20:00 |
|
G-Prime posted:I seem to recall getting that message periodically with a lovely SATA cable, might be worth swapping that first just to see where you stand. I'm hoping this is the case, even though new SSDs are loving cheap. I just can't fathom why it would only go flaky after months of undisturbed operation.
|
# ? Feb 6, 2017 21:03 |
Methylethylaldehyde posted:Yes, dedupe is literally terrible, tried it once on my machine, and ended up having to copy all my data to a 4tb external and delete the entire pool to get the cancer to go away. Dedup is quite simple: it's a table with an entry for each block, each entry taking up around 340 bytes, which as you can imagine quickly makes the table very large (in fact, the recommendation of 5GB of RAM per 1TB of diskspace for dedup isn't really excessive; it's pretty conservative). On Solaris, where ZFS was made, this isn't really an issue. OpenZFS, meanwhile, is trying to address it by adding a device type to store the dedup table on, so instead of keeping it in memory and spilling to disk when memory eventually runs out on x86 (which is what I'm positive happened in your case, because it happens for everyone unless you're running Xeon E7s), you can use an SSD. ZFS is an utterly game-changing filesystem, but it has its downsides just like anything else.
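A quick back-of-envelope check on those numbers, sketched in Python. The ~340 bytes per entry is the figure from the post above; the block sizes are assumptions I've picked to show the range. The dedup table's RAM footprint depends heavily on average block size, which is why a flat 5GB-per-TB rule of thumb can indeed be conservative.

```python
# Estimate dedup table (DDT) RAM usage: one entry per unique block,
# ~340 bytes per entry (figure from the post; real entry size varies).

BYTES_PER_ENTRY = 340

def ddt_ram_gb(pool_tb, avg_block_kb):
    """RAM (GiB) for the DDT of pool_tb TiB of data at the given
    average block size, assuming every block is unique."""
    blocks = (pool_tb * 2**40) / (avg_block_kb * 1024)
    return blocks * BYTES_PER_ENTRY / 2**30

# 1 TiB of data at the default 128K recordsize:
print(round(ddt_ram_gb(1, 128), 2))   # -> 2.66 GiB
# 1 TiB of small 8K blocks (e.g. VM images or databases):
print(round(ddt_ram_gb(1, 8), 1))     # -> 42.5 GiB
```

So a pool full of large media files is fairly benign, while small-block workloads blow straight past the 5GB/TB guideline, which lines up with dedup "happening to everyone" once the table spills out of RAM.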
|
|
# ? Feb 6, 2017 22:13 |
Combat Pretzel posted:the filesystem started to prefetch in 200MB blocks pretty quickly, because it spotted continuous linear reading. Just out of curiosity, how do you see it's doing this? Seems like a useful thing to know how to do!
|
|
# ? Feb 6, 2017 23:52 |
|
This probably isn't the right thread for it, so if people want to point me elsewhere, do so... my NAS just lost an SSD that wasn't mirrored (or really used for anything; I was going to use it for L2ARC before realizing how stupid that was). At the same time, my Windows 10 desktop lost its non-system rotational drive. As I replaced that with a 2TB drive and restored from CrashPlan, I put the two together and wondered how annoying it'd be to lose the system SSD. Is there an easy way on Windows 10 to mirror an existing SSD to a partition of a much larger rotational drive, so I could just continue to boot if the SSD died?
|
# ? Feb 7, 2017 00:40 |
|
fletcher posted:Just out of curiosity, how do you see it's doing this? Seems like a useful thing to know how to do! Now I just run ZFS on my NAS and haven't really checked what it does over there. Prefetching is probably limited, similar to write buffering: OpenSolaris buffers writes for up to 30 seconds if memory allows, whereas FreeNAS had it configured to 2 seconds.
|
# ? Feb 7, 2017 03:40 |
|
Heads up: Intel Atoms seem to be dying because of a chip clock failure. Cisco products are affected and they also mention Synology. https://www.theregister.co.uk/2017/02/06/cisco_intel_decline_to_link_product_warning_to_faulty_chip/
|
# ? Feb 7, 2017 08:14 |
|
Mr Shiny Pants posted:Heads up: Intel Atoms seem to be dying because of a chip clock failure. Cisco products are affected and they also mention Synology. drat. I planned to refresh my NAS late next year (I built it about this time in 2015), but maybe I should make that happen this year if the prices are good. I was hoping another two generations would bring SSDs down into the range where I could use them to make a completely silent NAS. I suppose the C2550 is still on Amazon with Prime shipping, so I can hold off panicking.
|
# ? Feb 7, 2017 09:04 |
|
Mr Shiny Pants posted:Heads up: Intel Atoms seem to be dying because of a chip clock failure. Cisco products are affected and they also mention Synology. Yeah, saw this mentioned in one of the other megathreads, with a note that it's probably gonna take a silicon change to fix them. There's some mention of a "board level workaround," but no definitive note on whether a software/firmware downloadable fix is possible. No word yet on what the crap Intel plans to do about it, especially since those Atom chips are in a looooot of random products. Basically, if you have a B0 stepping C20xx Atom, best to assume it'll die at roughly 18 months of use.
|
# ? Feb 7, 2017 13:59 |
|
Desuwa posted:was hoping another two generations would bring SSDs down into the range where I could use them to make a completely silent NAS. Unless you have a fairly small NAS (at which point maybe you could get away with a simple mirroring arrangement on another computer you've got), or you really don't care whatsoever about prices, you're going to have to wait a good bit longer than two generations: in 2013 you could get a 250GB SSD for ~$160, or $0.64/GB. In 2017 you can get a 960GB SSD for ~$220, or $0.23/GB. Assuming the same linear progress, in 2021 you should be able to get a 4TB SSD for $0.06/GB, or $230. In the meantime, you could probably get a 10TB HDD in 2021 for $100. Of course, you can always just say "gently caress you, I'm rich" and buy the 4TB SSDs already available... for a cool $1500.
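For what it's worth, the arithmetic above spelled out in Python. The prices are the post's figures; the extrapolation method (carrying the 2013-to-2017 ratio forward another four years) is an assumption. Note that a straight continuation of that ratio actually lands nearer $0.08/GB than the post's $0.06/GB, so a 4TB SSD in 2021 comes out closer to $330 than $230 under this model.

```python
# $/GB figures from the post, plus a simple trend extrapolation.

price_2013 = 160 / 250    # ~$0.64/GB for a 250GB SSD
price_2017 = 220 / 960    # ~$0.23/GB for a 960GB SSD

decline = price_2017 / price_2013   # ~0.36x over four years
price_2021 = price_2017 * decline   # same ratio applied for 2017->2021

print(round(price_2013, 2), round(price_2017, 2), round(price_2021, 2))
print(round(price_2021 * 4000))     # a 4TB SSD at that rate: ~$330
```

Either way the qualitative conclusion holds: even an aggressive trend leaves the hypothetical 2021 4TB SSD several times the price of a big HDD.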
|
# ? Feb 7, 2017 14:16 |
|
DrDork posted:Yeah, saw this mentioned in one of the other megathreads, with a note that it's probably gonna take a silicon change to fix them. There's some mention of a "board level workaround," but no definitive note on whether a software/firmware downloadable fix is possible. No word yet on what the crap Intel plans to do about it, especially since those Atom chips are in a looooot of random products. Jesus! What the gently caress Intel. Those stupid things are for usage in server poo poo. I guess i'm lucky I never used Atom builds for NAS stuff. Never trusted it to perform good enough.
|
# ? Feb 7, 2017 15:37 |
|
redeyes posted:Jesus! What the gently caress Intel. Those stupid things are for usage in server poo poo. I guess i'm lucky I never used Atom builds for NAS stuff. Never trusted it to perform good enough. Best hope your server has at least a two year warranty!
|
# ? Feb 7, 2017 15:46 |
|
DrDork posted:Unless you have a fairly small NAS (at which point maybe you could get away with a simple mirroring arrangement on another computer you've got), or you really don't care whatsoever about prices, you're going to have to wait a good bit longer than two generations: I don't think that we can expect the same linear progress anymore. Future HDD capacity improvements look extremely expensive and complex enough that they won't decrease in price dramatically, it might be more like we see 40TB Enterprise HDDs for $600 using HAMR. In the consumer space, it looks like there's a crossover coming where SSDs are going to be cheaper per GB than HDDs.
|
# ? Feb 7, 2017 16:27 |
|
I don't think we'll see linear progress either, but in the other direction: I doubt SSD price/GB will continue to drop at the same rate it has; rather, it will slow as we start needing more esoteric layering strategies to push density up. Or maybe we'll go back to 3.5" SSDs!

Super-massive enterprise HDDs will probably happen, but 40TB for $600 is probably generous: with current drives at 10TB for $500ish, that's a poo poo-ton of improvement for "free." Maybe in 4-5 years. Maybe.

I also doubt we'll see a 40TB SSD for $600 anytime soon: Seagate's 16TB SSD launched last year with prices reported around $6,000, and Samsung's 15TB SSD was $10,000. Reports on Seagate's new 60TB monster suggest a $30-40,000 price tag. Even if you assume prices would drop by 2/3 shifting from enterprise to consumer (which is a generous drop), you're still talking $10,000 for 60TB. Prices will come down, to be sure, but that's a long way to go before you hit sub-$1k. Like 5+ years.

HDDs will likely stonewall eventually, but that point seems a decent bit away. 10TB consumer-grade drives are already on the shelves, with larger ones aimed at for the near future. Pricing is pretty aggressive, too, with those 10TB drives going for $0.05/GB or less, which is still ~6.5x cheaper than large-format SSDs (e.g., a 1TB 850 Evo is ~$325, or $0.325/GB). So, yeah, some day SSDs may be able to compete on a price/GB basis, but it ain't gonna be any time soon unless you're talking very small (<1TB) drives, where they're already competitive.

tl;dr SSDs will remain substantially more expensive per GB for at least the next 3-4 years.

e; On the other hand, we're already past the point where "average users" can get an SSD at a price competitive with high-performance HDDs that is sufficiently large for normal use. I.e., a 500GB/1TB SSD is enough for probably 90% of users out there, relegating HDDs to mass storage for movie pirates and NAS installations.
DrDork fucked around with this message at 16:52 on Feb 7, 2017 |
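To put a rough number on "ain't gonna be any time soon," here's a crude crossover estimate in Python. The starting $/GB figures are from the post above; the annual decline rates are assumptions chosen for illustration, so treat the answer as an order of magnitude, not a forecast.

```python
# How long until SSDs catch HDDs on price per GB?  Starting figures
# are from the post; the annual decline rates are assumptions.

ssd = 0.325   # $/GB today (1TB 850 Evo, per the post)
hdd = 0.05    # $/GB today (10TB consumer drive, per the post)

ssd_decline = 0.75   # assume SSD $/GB falls ~25%/yr (roughly the 2013-2017 pace)
hdd_decline = 0.90   # assume HDD $/GB falls a slower ~10%/yr

years = 0
while ssd > hdd:
    ssd *= ssd_decline
    hdd *= hdd_decline
    years += 1

print(years)  # with these assumed rates, crossover is about a decade out
```

Tweak the decline rates and the answer moves a lot, which is exactly why such forecasts are shaky; but under anything resembling recent trends the crossover sits well outside a 3-4 year window.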
# ? Feb 7, 2017 16:48 |