|
im a turbo clown using btrfs on windows ama (no, seriously, i added a ssd to have some more space and winbtrfs automatically picked it up and formatted it and i couldnt find the magic incantations to reformat it in ntfs bc the format option in explorer doesnt work on btrfs partitions and it actually works anyway so im not bothering)
|
# ? Jan 10, 2020 22:28 |
|
Also, if you think btrfs being unreliable is a "meme" why do you think Redhat finally gave up on it?
|
# ? Jan 10, 2020 22:30 |
|
mystes posted:Also, if you think btrfs being unreliable is a "meme" why do you think Redhat finally gave up on it?

because they don't want to support it. it might be perfectly fine now, but they just don't want to deal with customer support for it

xfs used to have major issues with unexpected shutdowns. seems fine now though and it scales much better than ext4 when it comes to multithreaded workloads
|
# ? Jan 10, 2020 22:39 |
|
mystes posted:The filesystem situation is pretty bad though. EXT4 works fine for what it is but I wish there was something built into linux that had more modern features and wasn't scary for various reasons. BTRFS seems like it will never be safe and the licensing problem with ZFS can probably never be overcome. At this rate maybe reiserfs will somehow come back after reiser gets out on parole.

there is zero commercial demand for something zfs-like. the people who actually spend money on developers who work on the kernel (redhat, ibm, sgi) are happy with the current design, in which filesystems are strictly separated from volume management, encryption, et al. if their customers were demanding something vertically integrated, they would deliver it. but customers don't give a poo poo, so they haven't.

hifi posted:I don't really think open sourcing zfs really did anything, the push for those modern features came from the business world, and those usually have support contracts, SLAs, and dollar signs attached to it anyways. zfs being released was more of a "hey look what we found on the sidewalk" type of thing from the hobbyist world.

zfs was developed because sun and solaris were 15+ years behind their competitors. hp shipped with veritas lvm+vxfs by default; aix had its own proprietary lvm and modern-ish filesystem; linux had lvm + xfs. all three were massively better than solaris with its gross and horrible logged ufs implementation from 1992.

when sun decided to catch up they invested a shitload of money in an effort to be significantly better, so they could, idk, sell storage appliances or something. as it turns out, nobody gives a poo poo and zfs was never a competitive advantage. it leveled the playing field but i doubt zfs closed many deals for sun/oracle sales people
|
# ? Jan 10, 2020 22:46 |
|
The_Franz posted:xfs used to have major issues with unexpected shutdowns. seems fine now though and it scales much better than ext4 when it comes to multithreaded workloads

the "issue" was that linux didn't support write barriers outside of trivial cases, and xfs depends on reliable write barriers to commit data to disk.

they fixed the underlying write barrier support in a shitload of places (disk drivers, lvm modules, etc) to widen the range of cases in which write barriers actually worked, and now it's a much more pleasant experience to run xfs on e.g. a laptop
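for the curious, a rough sketch of how you'd poke at this plumbing on a modern kernel (the device names are assumptions, and note the explicit barrier knob has been the default for ages — xfs eventually dropped the barrier/nobarrier options entirely):

```shell
# does the block layer think this disk has a volatile write cache
# that needs flushing? (sda is an assumed device name)
cat /sys/block/sda/queue/write_cache

# the old explicit knob; barriers are on by default on any modern
# kernel so spelling this out is purely historical
mount -o barrier /dev/sda1 /mnt
```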
|
# ? Jan 10, 2020 22:49 |
|
The only reason ext3 was stable with data by default in common use case was an accident
|
# ? Jan 10, 2020 22:52 |
|
Captain Foo posted:The only reason ext3 was stable with data by default in common use case was an accident

if you used the default settings when you created the filesystem, ext3 wasn't reliable without battery-backed hardware raid, because data journaling is turned off by default. data journaling is turned off by default because it makes everything horrendously slow.

ext3 was never a particularly good filesystem
|
# ? Jan 10, 2020 22:56 |
|
Notorious b.s.d. posted:if you use the default settings when you create the filesystem, ext3 wasn't reliable without battery backed hardware raid, because data journaling is turned off by default And yet it was the standard for quite some time
|
# ? Jan 10, 2020 22:58 |
|
mystes posted:I tried using it on a laptop for a while. If the system wasn't shut down cleanly (i.e. I left the laptop suspended and forgot to charge it) there was like a 1 in 3 chance that the filesystem would become completely corrupted. After reinstalling the whole system several times I switched to ext4. never happened to me with btrfs on my laptops, and i've done a fair share of hard shutdowns. do you use it with lvm or something?
|
# ? Jan 10, 2020 23:03 |
|
Zlodo posted:im a turbo clown using btrfs on windows ama should go all out and use reactos, imo.
|
# ? Jan 10, 2020 23:05 |
|
Tankakern posted:never happened to me with btrfs on my laptops, and i've done a fair share of hard shutdowns. do you use it with lvm or something?
|
# ? Jan 10, 2020 23:05 |
|
well, a hot tip is don't mix btrfs and lvm, btrfs does everything lvm sets out to do, so it's redundant
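a sketch of what that overlap looks like in practice — btrfs pools and mirrors devices itself, which is most of what lvm/mdraid would otherwise be doing underneath it (device names are made up; this needs root and real disks):

```shell
# create a mirrored pool across two disks, no lvm/mdraid underneath;
# -d/-m set the raid profile for data and metadata respectively
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# grow the pool online, then rebalance data onto the new disk
btrfs device add /dev/sdd /mnt/pool
btrfs balance start /mnt/pool
```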
|
# ? Jan 10, 2020 23:52 |
|
i'm deffo hoping btrfs has a similar redemption arc as xfs - those early days were trepidatious, sure, but at some point xfs became the gold standard (good to know the backstory behind deficiencies with extents handling)

i'm glad this became a filesystems thread :3 data=journal or gtfo
|
# ? Jan 10, 2020 23:54 |
|
which filesystem does Linus use. ext4 right? that’s the supported one. use that
|
# ? Jan 11, 2020 01:35 |
|
the two dominant filesystems are ext4 and xfs, yeah
|
# ? Jan 11, 2020 03:14 |
|
I had this weird problem with libvirt where when I edited the settings for a vm they would change back again if I rebooted the host or restarted the libvirt service. Apparently this was because I had two configuration files with the same vm name, although it's weird because it's not like it was copying the configuration from the other VM as far as I can tell, and there were no errors, and using virsh to edit the configuration worked fine and the changes persisted until I restarted the service. It would have been nice if it just said "error: you have two vms with the same name, dumbass."
|
# ? Jan 11, 2020 04:23 |
|
Suspicious Dish posted:last time i used it i closed my laptop lid and the filesystem got corrupted. maybe its better now lol
|
# ? Jan 11, 2020 07:59 |
|
Zlodo posted:im a turbo clown using btrfs on windows ama Jesus Christ
|
# ? Jan 11, 2020 08:00 |
|
i use btrfs in my synology NAS's cuz that's what they recommended i sure hope it doesnt screw up bad
|
# ? Jan 11, 2020 08:03 |
|
i use APFS on my laptop. its truly the worlds most advanced filesystem
|
# ? Jan 11, 2020 08:04 |
|
i dont know what filesystems are on my laptops right now OP i would have to check
|
# ? Jan 11, 2020 08:05 |
|
same as forums poster pram but imac
|
# ? Jan 11, 2020 08:22 |
|
mystes posted:I wonder if the situation wouldn't actually be better if ZFS had never been open sourced, since its existence seems to have unfortunately decreased interest in developing alternatives. if IBM openly commits Linux to kernel binary compatibility, supporting ZFS won’t be a big deal
|
# ? Jan 11, 2020 08:35 |
|
eschaton posted:if IBM openly commits Linux to kernel binary compatibility, supporting ZFS won’t be a big deal never change, man, you're a real institution
|
# ? Jan 11, 2020 08:49 |
|
Tankakern posted:mocking btrfs is like a meme. it works wonders, you should use it. used it on literally all of my systems for years, i've never had any issues.

don't use it on tiny partitions, like 32GB or less
|
# ? Jan 11, 2020 08:52 |
|
psiox posted:ty for the positive perspective on btrfs! any caveats folks should know about, like never letting the fs fill up past a certain percentage, not using trim, etc etc? (i have no idea if those things are relevant)

From what I remember, it is not recommended to use btrfs to store virtual machine images. Check the btrfs documentation. Just use xfs
|
# ? Jan 11, 2020 09:10 |
|
el dorito posted:From what I remember, it is not recommended to use btrfs to store virtual machine images. Check the btrfs documentation

rewriting files in place isn’t well supported by btrfs. you can do it but it would rather you didn’t. that’s why vm images, which receive a lot of in-place writes, shouldn’t be stored in btrfs. important data that should survive an unexpected reboot also shouldn’t be stored in btrfs. but if you have cold throwaway data and want to snapshot it, consider btrfs
|
# ? Jan 11, 2020 09:56 |
|
you can have the best of both worlds with btrfs. mark your folders with vm images, log files and databases with "chattr +C" (meaning nocow). that makes all new files in these folders nocow, thus making your vms not go horribly slow. it will not convert old files; that must be done manually.
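concretely, something like this (the path is just an example; the +C flag only takes effect for files created after it is set, ideally on an empty directory):

```shell
# mark a directory nocow before putting vm images in it
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images
lsattr -d /var/lib/libvirt/images   # the 'C' flag should show up

# existing files keep copy-on-write; copy them into the nocow
# directory so they get recreated with the flag
```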
|
# ? Jan 11, 2020 10:30 |
|
It Just Works (randomly deleting your data)
|
# ? Jan 11, 2020 10:38 |
|
no it doesn't
|
# ? Jan 11, 2020 10:47 |
|
My btrfs experience a few years back was littered with multisecond disk i/o waits that were loads of fun. Must not have manually chattr'd several thousand files, my bad!

From https://btrfs.wiki.kernel.org/index.php/Gotchas : "Using the ssd mount option with older kernels than 4.14 has a negative impact on the usability and lifetime of modern SSDs. With 4.14+ it is safe and recommended again to use the ssd mount option for non-rotational storage."

Awesome - until the very end of 2017, the btrfs "ssd" option was only safe for non-ssds. Typical.
|
# ? Jan 11, 2020 16:04 |
|
same story as xfs then, it was trash a couple years ago. you dont have to fiddle with mount options at all (and havent had to for a long time), ssd and such is autodetected
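easy enough to verify on a running system — /proc/mounts shows the options the kernel actually chose, not just what you passed:

```shell
# print the fs type and effective mount options for the root fs;
# on btrfs backed by a non-rotational device you'd expect "ssd"
# to appear here without ever asking for it
awk '$2 == "/" { print $3, $4 }' /proc/mounts
```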
|
# ? Jan 11, 2020 16:07 |
|
Tankakern posted:same story as xfs then, it was trash a couple years ago. you dont have to fiddle with mount options at all (and havent for a long time), ssd and such is autodetected I'm pretty sure this thread has quotes from you claiming the exact same "oh sure, it sucked before but it's good now!" more than a couple years ago.
|
# ? Jan 11, 2020 16:11 |
|
Lol, you were talking it up in 2014. 2014!
|
# ? Jan 11, 2020 16:14 |
|
ext4 has never randomly poo poo itself or poo poo all over my files op. maybe xfs shouldn't have been ported to linux to begin with if the linux block device layer wasn't up to the task or whatever, because guess what, i really couldn't give a poo poo less whether xfs making GBS threads itself was the filesystem's fault or the block layer's fault.

https://danluu.com/deconstruct-files/

quote:If we want to make sure that filesystems work, one of the most basic tests we could do is to inject errors at the layer below the filesystem to see if the filesystem handles them properly. For example, on a write, we could have the disk fail to write the data and return the appropriate error. If the filesystem drops this error or doesn't handle this properly, that means we have data loss or data corruption. This is analogous to the kinds of distributed systems faults Kyle Kingsbury talked about in his distributed systems testing talk yesterday (although these kinds of errors are much more straightforward to test).

i guess i'm maybe a little bit naive but it's kind of surprising, at least from a 2020 perspective, that a piece of code like a filesystem hasn't had all of its error paths rigorously exercised by an automated test suite. like somebody just wrote an error recovery routine and said "ehh it'll probably work"
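you can play with the cheapest possible version of this error injection right from a shell: /dev/full fails every write with ENOSPC, which makes it trivial to check whether a program actually notices write errors or silently drops them

```shell
# every write() to /dev/full returns ENOSPC; a program that checks
# its write errors reports failure instead of pretending success
if echo "important data" > /dev/full 2>/dev/null; then
    echo "write silently 'succeeded' (bad)"
else
    echo "write failed and the error was seen (good)"
fi
```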
|
# ? Jan 11, 2020 16:26 |
|
use ext4 if it ain't broken, don't fix it
|
# ? Jan 11, 2020 16:42 |
|
Poopernickel posted:use ext4

oh wait, wait i have an anecdote here too

Broken Machine posted:it scales better than ext4, has roughly equivalent performance, and is imo more robust. a few years ago I had an ext4 filesystem that blew up, wasn't able to fully repair it, reformatted it to xfs and have had zero issues since.

Sapozhnik posted:ext4 has never randomly poo poo itself or poo poo all over my files op. maybe xfs shouldn't have been ported to linux to begin with if the linux block device layer wasn't up to the task or whatever because guess what i really couldn't give a poo poo less whether xfs making GBS threads itself was the filesystem's fault or the block layer's fault.

is it fair to ask an initial deployment of software be perfect, especially when it's due to underlying os issues? by that measure, how robust was ext when it first rolled out?
|
# ? Jan 11, 2020 17:09 |
|
Broken Machine posted:is it fair to ask an initial deployment of software be perfect, especially when it's due to underlying os issues?

yes. when "software" is a filesystem that is marked as stable and ready for production use. i just said i don't care whose fault it is when a filesystem shits itself and destroys my data.

quote:by that measure, how robust was ext when it first rolled out?

which ext? 2? 3? 4? 2 wasn't even journalled. 3 was supposedly ropey but it didn't give me any trouble, which is more than i can say for xfs. 4 was stable from day one.
|
# ? Jan 11, 2020 17:18 |
|
4 is functionally the same as 3 with big file support added on so if it weren’t stable it’d be a hell of a fuckup
|
# ? Jan 11, 2020 17:30 |
|
|
personally I just use fat16 for everything since that was my nickname in my teens
|
# ? Jan 11, 2020 17:32 |