|
Ashex posted:My media server has been running Archlinux for the last 6 years and has worked really well for me. Unfortunately, I made the fatal error of going with f2fs when I upgraded to an SSD, and after a power outage I've now got some intense disk corruption I'm recovering from, so it's looking like I'll need to do a fresh install, leaving me with two choices: I've been really happy with SUSE Tumbleweed, but I have no idea how the SUSE OBS stacks up against ABS, because I've never used it -- generally someone else has already built whatever I want on OBS and I just need to add the repo.
|
# ? Nov 21, 2014 16:08 |
|
Ashex posted:My media server has been running Archlinux for the last 6 years and has worked really well for me. Unfortunately, I made the fatal error of going with f2fs when I upgraded to an SSD, and after a power outage I've now got some intense disk corruption I'm recovering from, so it's looking like I'll need to do a fresh install, leaving me with two choices: ToxicFrog basically said it all. openSUSE Tumbleweed (née Factory) is a rolling release distribution, and the OBS is pretty fantastic if you are already familiar with .SPEC files. It does automatic (dependency-based) rebuilds, etc. The only caveat is that you are only allowed to build open/free software on the OBS, i.e. no proprietary licenses are allowed, as per their legal team. I've had very few problems over the years with both OBS and openSUSE itself, and they support a wide range of desktop environments etc.
|
# ? Nov 21, 2014 16:41 |
|
Hollow Talk posted:ToxicFrog basically said it all. openSUSE Tumbleweed (née Factory) is a rolling release distribution, and the OBS is pretty fantastic if you are already familiar with .SPEC files. It does automatic (dependency-based) rebuilds, etc. The only caveat is that you are only allowed to build open/free software on the OBS, i.e. no proprietary licenses are allowed, as per their legal team. I'd say just use whatever distro you like best. If that's Arch, use Arch. If you wanna experiment, SuSE is great.
|
# ? Nov 21, 2014 17:42 |
|
evol262 posted:Fedora has Copr/Koji as an analogue to this, and lpf for building stuff that doesn't do redistribution. Interesting, I was leaning towards Fedora but I'm not sure there is an upgrade path between major releases (main reason I love rolling release). I know with CentOS the general recommendation is to backup and reinstall.
|
# ? Nov 21, 2014 18:09 |
|
Ashex posted:Interesting, I was leaning towards Fedora but I'm not sure there is an upgrade path between major releases (main reason I love rolling release). I know with CentOS the general recommendation is to backup and reinstall. The centos/RHEL recommendation is because they're branched off, so 6->7 is basically F12->F19. Fedora has an upgrade tool (fedup) which is necessary for big changes, like the move from /bin to /usr/bin. Other than that, you can mostly switch repos, import the new GPG key, and upgrade. Fedora is supported for 2 releases (~1 yr), basically, and upgrading from ${release-2} is supported through official tooling. Anything more than that is up to you. Or you can just run rawhide and be on a rolling release. Not that I'd necessarily recommend that. Incidentally, upgrading between major CentOS/RHEL releases was always technically possible, just fiddly, and generally involved hacking RPM up to read the new changes (xz, filedigests, whatever) by pulling in Fedora packages from when it happened, then adding a new glibc, librpm, rpm-python, and enough other base tools for rpm/yum to work. There's now an upgrade tool that's officially supported. Not that it'll always work, but at least it's something, and companies don't have to hire me to write them upgrade tools anymore.
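As a rough sketch of the fedup path described above (assuming a fully updated Fedora 20 host going to 21; fedup was the officially supported tooling at the time):

```shell
# make sure the current release is fully updated first
yum update -y && yum install -y fedup

# fetch the target release's packages over the network
fedup --network 21

# the actual upgrade runs from a "System Upgrade" entry added to the boot menu
reboot
```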
|
# ? Nov 21, 2014 18:25 |
|
We've got a FOG server installation at work running on Ubuntu 12.04 that was running out of space (FOG is an OSS computer management system, allowing capture/deployment of OS images over a network via PXE boot. It's good stuff). The important bit here is that FOG needs to mount the /images/ directory over NFS during the imaging process. We figured that we would solve the space issue by moving all operating system images to a NAS, and then mounting that location via NFS on startup onto the original location where the images were stored. Effectively, the server would mount an NFS share from IP_A and then export it over a different IP_B. However, any attempts to access this chain-NFSed share fail with a permission denied error. Is this kind of chain-mounting allowed in Linux, or do we have to fix this in a different way?
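For what it's worth, the setup being attempted would look roughly like this (hypothetical addresses and paths). Note that an explicit fsid= is mandatory in /etc/exports here, because an NFS mount has no stable device number for the server to key on, and even with that the in-kernel NFS server has historically refused to re-export NFS mounts; the user-space unfs3 server is one commonly cited workaround:

```shell
# /etc/fstab on the FOG server -- mount the NAS share where FOG expects it
# 192.168.0.10:/volume1/images  /images  nfs  defaults  0 0

# /etc/exports -- re-exporting requires an explicit fsid, since the
# kernel can't derive a stable filesystem id from an NFS mount
/images  *(rw,no_root_squash,fsid=1)

# apply and verify the export table
exportfs -ra
exportfs -v
```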
|
# ? Nov 22, 2014 01:31 |
|
NFS requires that the underlying filesystem satisfy some requirements that it itself does not satisfy.
|
# ? Nov 22, 2014 01:41 |
|
pseudorandom name posted:NFS requires that the underlying filesystem satisfy some requirements that it itself does not satisfy. e: Thanks for the fast reply! Would there be any other way to work around this? I believe the NAS also supports CIFS export (which I don't think can be daisy-chained over NFS either) or probably SMB...
|
# ? Nov 22, 2014 01:51 |
|
evol262 posted:The centos/RHEL recommendation is because they're branched off, so 6->7 is basically F12->F19. Fedora has an upgrade tool (fedup) which is necessary for big changes, like the move from /bin to /usr/bin. Other than that, you can mostly switch repos, import the new GPG key, and upgrade. Fedora is supported for 2 releases (~1 yr), basically, and upgrading from ${release-2} is supported through official tooling. Anything more than that is up to you. Ah, that's good to hear, I think I'll go with Fedora then. I'm tempted to get the beta since the final release is only a couple of weeks away, but perhaps this will be a good way to test the upgrade path.
|
# ? Nov 22, 2014 12:17 |
|
evol262 posted:There's now an upgrade tool that's officially supported. Not that it'll always work, but at least it's something, and companies don't have to hire me to write them upgrade tools anymore. My experience is that the upgrade tool works. I'm not sure I'd use it for anything super mega critical, but I've used it to upgrade a few dev/test servers from CentOS 6 to 7 and it's been relatively pain-free. It chucks out huge amounts of reporting before you do any updates, so you should know what won't work before you get started. The GUI is one of the messier things, but for servers that's possibly/probably not an issue.
|
# ? Nov 22, 2014 13:13 |
|
Ashex posted:Ah that's good to hear, I think I'll go with Fedora then. I'm tempted to get the beta since final release in only a couple weeks away but perhaps this will be a good way to test the upgrade path. I'd just start with 21, honestly. It's stable, and it's going through a reorg into workstation/server/etc. Which can potentially be annoying during upgrades, but it's never happened before and will never happen again.
|
# ? Nov 22, 2014 16:30 |
|
evol262 posted:I'd just start with 21, honestly. It's stable, and it's going through a reorg into workstation/server/etc. Which can potentially be annoying during upgrades, but it's never happened before and will never happen again. Too late for that! Already installed 20 and am getting it set up. I realized while migrating configs and stuff over that I should use Docker containers for the unpackaged things like commafeed/hastebin.
|
# ? Nov 22, 2014 16:54 |
|
Am I alright using btrfs on an SSD?
|
# ? Nov 25, 2014 05:55 |
|
I am not a book posted:Am I alright using btrfs on an SSD? Do not use btrfs.
|
# ? Nov 25, 2014 06:00 |
|
Suspicious Dish posted:Do not use btrfs. I'm not going to use ext*, so unless you're suggesting reiser4 you aren't really helping. edit to be less sarcastic: I'm explicitly interested in what's up-and-coming, and I've run it for years on spinning disks. I want to know whether it does anything that might damage or impair the performance of an SSD.
|
# ? Nov 25, 2014 06:21 |
|
I am not a book posted:I'm not going to use ext*, so unless you're suggesting reiser4 you aren't really helping. I haven't seen filesystem zealotry in years. ext* is probably the safest, most widely supported fs you can use. You're free to make other choices, and btrfs can be a reasonable one if you don't want to use lvm. It's safe for SSDs. But there's no good reason to be vehemently against ext4.
|
# ? Nov 25, 2014 06:29 |
|
Feel free to use btrfs and watch it lose all your data like it did to me three times then. You're welcome to have strong opinions and make bad life choices as a result. There's also XFS which is now the default in RHEL7.
|
# ? Nov 25, 2014 06:38 |
|
Suspicious Dish posted:Feel free to use btrfs and watch it lose all your data like it did to me three times then. You're welcome to have strong opinions and make bad life choices as a result. Yeah, intentionally running unstable software definitely means that I won't do regular backups.
|
# ? Nov 25, 2014 06:45 |
|
I am not a book posted:Yeah, intentionally running unstable software definitely means that I won't do regular backups. Make sure you don't cut yourself on that edge. Great that you have backups. You wouldn't need them if you ran ext4+lvm (xfs is still a little picky). But so you know, "I know what I'm doing so I'll just be defensive and sarcastic when somebody questions my choices" and "I ask for advice on an internet comedy forum" makes a fun dichotomy.
|
# ? Nov 25, 2014 07:12 |
|
evol262 posted:Great that you have backups. You wouldn't need them if you ran ext4+lvm (xfs is still a little picky).
|
# ? Nov 25, 2014 07:53 |
|
I am not a book posted:Yeah, intentionally running unstable software definitely means that I won't do regular backups. 1. What will you store the backups on if not btrfs? 2. Backups are nice but after your btrfs hosed your OS and you have to reinstall, it gets old quick.
|
# ? Nov 25, 2014 07:59 |
|
Let's put it another way: Is there even a single compelling reason for not using ext4 on a Linux system?
|
# ? Nov 25, 2014 09:34 |
|
KozmoNaut posted:Let's put it another way: Is there even a single compelling reason for not using ext4 on a Linux system? btrfs' snapshots and subvolumes are pretty drat sexy. I use snapshots for quick and frequent backups, btrfs-send them to another box, and have regular old backups on ext4 as well.
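The snapshot-and-send workflow described above looks roughly like this (hypothetical subvolume paths and hostname):

```shell
# take a read-only snapshot (read-only is required for btrfs send)
btrfs subvolume snapshot -r /home /home/.snapshots/home-2014-11-25

# stream it to another box and materialize it there
btrfs send /home/.snapshots/home-2014-11-25 | \
    ssh backupbox btrfs receive /backups/

# later snapshots can be sent incrementally against a common parent (-p),
# so only the changed blocks cross the wire
btrfs send -p /home/.snapshots/home-2014-11-25 \
    /home/.snapshots/home-2014-11-26 | \
    ssh backupbox btrfs receive /backups/
```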
|
# ? Nov 25, 2014 10:11 |
|
KozmoNaut posted:Let's put it another way: Is there even a single compelling reason for not using ext4 on a Linux system? Some things work better on XFS, e.g. Gluster.
|
# ? Nov 25, 2014 13:14 |
|
Well, OK then. For very specific specialized use cases, ext4 may be usable but not ideal. It's still the best general-use Linux filesystem.
|
# ? Nov 25, 2014 14:00 |
|
evol262 posted:Great that you have backups. You wouldn't need them if you ran ext4+lvm (xfs is still a little picky). Can you elaborate on the xfs comment? I've begun using XFS on all of our EL7 builds and have even begun using it on some EL6 hosts, figuring that if Red Hat fully supports it to the point that it's now the default filesystem, then I should probably follow suit. So far I like its toolset better than ext*'s, and I like its default mount options (relatime by default is nice), but I haven't played with it for long enough to see any real gotchas for any of my workloads yet. Before I start putting mission-critical stuff on it, I'm curious to hear if anyone has had any poor experiences in using it.
|
# ? Nov 25, 2014 15:27 |
|
Any comments on what special configuration (if any) I should do if I'm hosting Gluster on ext4, instead of xfs? For instance, gluster.org recommends using 512-byte inodes on XFS, and I assume I'd want to do the same for ext4. Any other gotchas? At $WORK, I can use the pre-built Gluster binaries provided by gluster.org, but not anyone's prebuilt XFS binaries, because Politics. (If SGI offered them, we probably could use them, but third-party repackagers like EPEL are right out.) And we don't have the budget to get XFS from Red Hat. So I'm trying to do the best I can. Fortunately, for the project I'm working on, peak performance isn't really a requirement - it's basically a glorified, redundant LAMP stack Web hosting setup.
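For reference, the inode-size settings in question look like this (hypothetical device; the -I flag is ext4's rough equivalent of gluster.org's XFS recommendation):

```shell
# XFS: 512-byte inodes, so Gluster's extended attributes fit inline
mkfs.xfs -i size=512 /dev/sdb1

# ext4: -I sets the inode size in bytes (256 is the usual default)
mkfs.ext4 -I 512 /dev/sdb1
```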
|
# ? Nov 25, 2014 15:48 |
|
I've noticed that samba is much slower on Fedora 20 than it was on Archlinux; smb.conf is configured identically, so it's not a weird setting (I think). What would cause this odd behavior?
|
# ? Nov 25, 2014 16:11 |
|
Weird Uncle Dave posted:Any comments on what special configuration (if any) I should do if I'm hosting Gluster on ext4, instead of xfs? For instance, gluster.org recommends using 512-byte inodes on XFS, and I assume I'd want to do the same for ext4. Any other gotchas? Technically there is nothing special you have to do with ext4, but there was a bug with Gluster 3.3.x and ext4 which would cause it to hang: http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/ Which branch are you thinking of using (3.4, 3.5, or 3.6), and which features?
|
# ? Nov 25, 2014 16:36 |
|
I probably will use 3.6. Nothing fancy, just two-node replication and either FUSE or NFS mounting (need to see if there's a meaningful performance difference in my environment).
|
# ? Nov 25, 2014 16:38 |
|
Weird Uncle Dave posted:I probably will use 3.6. Nothing fancy, just two-node replication and either FUSE or NFS mounting (need to see if there's a meaningful performance difference in my environment). Are you planning to use normal files or sparse VM image files? There's currently a bug in 3.6 which messes up replication of sparse files. It will apparently be fixed soon. I think I'm still right in saying that if you use the NFS client it doesn't automatically fail over if a server goes down; the FUSE glusterfs client does, so that is something to consider. Also, for 3.4, which we use, NFS was faster if you have lots (millions) of small files; not sure if the new features in 3.6 have closed that gap.
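A minimal sketch of the two-node setup being discussed (hypothetical hostnames and brick paths; Gluster's built-in NFS server speaks NFSv3, hence vers=3):

```shell
# on server1, after both nodes are installed and peered
gluster peer probe server2
gluster volume create gv0 replica 2 \
    server1:/bricks/gv0 server2:/bricks/gv0
gluster volume start gv0

# FUSE client: knows about all replicas, so it fails over automatically
mount -t glusterfs server1:/gv0 /mnt/gv0

# NFS client: pinned to the one server it mounted, no automatic failover
mount -t nfs -o vers=3 server1:/gv0 /mnt/gv0
```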
|
# ? Nov 25, 2014 17:00 |
|
Weird Uncle Dave posted:Any comments on what special configuration (if any) I should do if I'm hosting Gluster on ext4, instead of xfs? For instance, gluster.org recommends using 512-byte inodes on XFS, and I assume I'd want to do the same for ext4. Any other gotchas? Cidrick posted:Can you elaborate on the xfs comment? I've begun using XFS on all of our EL7 builds and have even begun using it on some EL6 hosts, figuring that if Red Hat fully supports it to the point that it's now the default filesystem then I should probably follow suit. So far I like its toolset over ext* and like its default mount options (relatime by default is nice), but I haven't played with it for long enough to see any real gotchas for any of my workloads yet. Before I start putting mission-critical stuff on it, I'm curious to hear if anyone has had any poor experiences in using it. XFS is bad about journal replays. It can get into a state where there are unreplayed transactions in the journal so you can't fsck it, but there are filesystem problems, so you can't mount it to replay the journal (xfs_repair doesn't do both). This is really apparent if you have an unclean shutdown with the system clock off, but any unclean shutdown can trigger it. It's the default because we can reasonably expect some systems to have filesystems which exceed the size limits of ext4 inside the RHEL lifecycle, not because it's technically better (though it is a lot better in some ways, it's worse in others).
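For anyone who hits that wedged state, the usual escape hatch looks like this (hypothetical device; note that -L discards the dirty log, so any unreplayed transactions are lost):

```shell
# first try letting the kernel replay the log via a mount/umount cycle
mount /dev/sdb1 /mnt && umount /mnt

# if the mount itself fails, zero the log as a last resort -- this
# throws away unreplayed transactions and then repairs what's left
xfs_repair -L /dev/sdb1
```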
|
# ? Nov 25, 2014 17:10 |
|
We have a policy, and I agree that it's a dumb policy, that RPMs have to come either from Red Hat, or from the vendor/official site of whatever product we want to use. Third-party repos like EPEL are not allowed. As far as I'm aware, the only way to get the XFS RPMs for RHEL 6 is by paying for a Scalable File System entitlement. And I can't find any prebuilt XFS RPMs from SGI.
|
# ? Nov 25, 2014 18:01 |
|
XFS is part of the kernel, dude.
|
# ? Nov 25, 2014 18:02 |
|
True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount?
|
# ? Nov 25, 2014 18:05 |
|
use ext4 on rhel 6 and xfs on rhel 7
|
# ? Nov 25, 2014 18:19 |
|
The only problem I've encountered with ext4 has been on an extremely busy mysql replication slave. The amount of overhead from the jbd2 process in the default journaling mode ('data=ordered') got excessive, and the slave started falling behind the master. That master, by the way, had identical OS and hardware but used XFS for its mysql data partition... and was running at HALF the load average of the slave, despite doing all the work for the app stack, while the slave was doing nothing but replicating the database. In any case, we considered switching the slave over to XFS, but instead we found a half-assed solution that worked just as well: remounting the ext4 partition in data=writeback mode (a much more XFS-like journal). That brought the performance in line with the master, and we're willing to accept the small risk of corruption from an unclean shutdown.
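For reference, ext4 typically refuses to change journaling modes on a live remount, so the switch usually has to go in persistently (hypothetical device and mount point):

```shell
# bake the mode into the filesystem's default mount options...
tune2fs -o journal_data_writeback /dev/mapper/vg0-mysql

# ...or set it per-mount in /etc/fstab and remount at a quiet moment:
# /dev/mapper/vg0-mysql  /var/lib/mysql  ext4  defaults,data=writeback  0 2
```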
|
# ? Nov 25, 2014 18:37 |
|
Weird Uncle Dave posted:True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount? Unless I'm missing something, xfsprogs is definitely in the official RH/CentOS EL6 repos.
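Assuming that holds for the channel layout in question (on CentOS 6 xfsprogs is in base; on RHEL 6 it historically lived in the Scalable File System add-on channel, so it's worth checking the entitlements), the whole thing is just:

```shell
yum install xfsprogs
mkfs.xfs /dev/sdb1          # hypothetical device
mount /dev/sdb1 /mnt/data
```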
|
# ? Nov 25, 2014 18:55 |
|
Weird Uncle Dave posted:True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount?
|
# ? Nov 25, 2014 19:10 |
|
Weird Uncle Dave posted:True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount? Consider using a CentOS xfsprogs RPM. CentOS is officially part of Red Hat now, so maybe that's good enough for your managers.
|
# ? Nov 25, 2014 20:27 |