Register a SA Forums Account here!
JOINING THE SA FORUMS WILL REMOVE THIS BIG AD, THE ANNOYING UNDERLINED ADS, AND STUPID INTERSTITIAL ADS!!!

You can: log in, read the tech support FAQ, or request your lost password. This dumb message (and those ads) will appear on every screen until you register! Get rid of this crap by registering your own SA Forums Account and joining roughly 150,000 Goons, for the one-time price of $9.95! We charge money because it costs us money per month for bills, and since we don't believe in showing ads to our users, we try to make the money back through forum registrations.
 
  • Post
  • Reply
ToxicFrog
Apr 26, 2008


Ashex posted:

My media server has been running Archlinux for the last 6 years and has worked really for me, unfortunately I made the fatal error of going with f2fs when I upgraded to an SSD and after a power outage. I've now got some intense disk corruption I'm recovering from so it's looking like I'll need to do a fresh install leaving me with two choices:

  • Reinstall Archlinux and copy over database/config files
  • Change distro and copy over database/config files

The main reason I've stuck with archlinux so long is because of AUR/ABS, the ability to custom build anything and easily turn it into a package has spoiled me. Is there another distro I should consider? I'm familiar with SPEC files so something rpm based wouldn't be scary.

I've been really happy with SUSE Tumbleweed, but I have no idea how the SUSE OBS stacks up against ABS, because I've never used it -- generally someone else has already built whatever I want on OBS and I just need to add the repo.

Adbot
ADBOT LOVES YOU

Hollow Talk
Feb 2, 2014

Ashex posted:

My media server has been running Archlinux for the last 6 years and has worked really for me, unfortunately I made the fatal error of going with f2fs when I upgraded to an SSD and after a power outage. I've now got some intense disk corruption I'm recovering from so it's looking like I'll need to do a fresh install leaving me with two choices:

  • Reinstall Archlinux and copy over database/config files
  • Change distro and copy over database/config files

The main reason I've stuck with archlinux so long is because of AUR/ABS, the ability to custom build anything and easily turn it into a package has spoiled me. Is there another distro I should consider? I'm familiar with SPEC files so something rpm based wouldn't be scary.

ToxicFrog basically said it all. openSUSE Tumbleweed (née Factory) is a rolling release distribution, and the OBS is pretty fantastic if you are already familiar with .SPEC files. It does automatic (dependency-based) rebuilds etc. The only caveat is that you are only allowed to build open/free software on the OBS, i.e. no proprietary licenses are allowed as per their legal team.

I've had very few problems over the years with both OBS and openSUSE itself, and they support a wide range of desktop environments etc.

evol262
Nov 30, 2010
#!/usr/bin/perl

Hollow Talk posted:

ToxicFrog basically said it all. openSUSE Tumbleweed (née Factory) is a rolling release distribution, and the OBS is pretty fantastic if you are already familiar with .SPEC files. It does automatic (dependency-based) rebuilds etc. The only caveat is that you are only allowed to build open/free software on the OBS, i.e. no proprietary licenses are allowed as per their legal team.

I've had very few problems over the years with both OBS and openSUSE itself, and they support a wide range of desktop environments etc.
Fedora has Copr/Koji as an analogue to this, and lpf for building stuff that doesn't do redistribution.

I kinda say just use whatever distro you like the best. If that's arch, use arch. If you wanna experiment, SuSE is great.

Ashex
Jun 25, 2007

These pipes are cleeeean!!!

evol262 posted:

Fedora has Copr/Koji as an analogue to this, and lpf for building stuff that doesn't do redistribution.

I kinda say just use whatever distro you like the best. If that's arch, use arch. If you wanna experiment, SuSE is great.

Interesting, I was leaning towards Fedora but I'm not sure there is an upgrade path between major releases (main reason I love rolling release). I know with CentOS the general recommendation is to backup and reinstall.

evol262
Nov 30, 2010
#!/usr/bin/perl

Ashex posted:

Interesting, I was leaning towards Fedora but I'm not sure there is an upgrade path between major releases (main reason I love rolling release). I know with CentOS the general recommendation is to backup and reinstall.

The centos/RHEL recommendation is because they're branched off, so 6->7 is basically F12->F19. Fedora has an upgrade tool (fedup) which is necessary for big changes, like the move from /bin to /usr/bin. Other than that, you can mostly switch repos, import the new GPG key, and upgrade. Fedora is supported for 2 releases (~1 yr), basically, and upgrading from ${release-2} is supported through official tooling. Anything more than that is up to you.

Or you can just run rawhide and be on a rolling release. Not that I'd necessarily recommend that.

Incidentally, upgrading between major CentOS/RHEL releases was always technically possible, just fiddly, and generally involved hacking RPM up to read the new changes (xz, filedigests, whatever) by pulling in Fedora packages from when it happened, then adding a new glibc, librpm, rpm-python, and enough other base tools for rpm/yum to work. There's now an upgrade tool that's officially supported. Not that it'll always work, but at least it's something, and companies don't have to hire me to write them upgrade tools anymore.

shodanjr_gr
Nov 20, 2007
We got a FOG server installation at work running on Ubuntu 12.04 that was running out of space (FOG is an OSS computer management system, allowing capture/deployment of OS images over a network via PXE boot. It's good stuff). The important bit from here is that fog needs to mount the /images/ directory over NFS during the imaging process.

We figured that we would solve the imaging issue by moving all operating system images to a NAS, and then mounting that location via NSF on startup onto the original location where the images where stored. Effectively, the server would mount an NFS share from IP_A and then export it over a different IP_B. However, any attempts to access this chain-NFSed share fail with a permission denied error.

Is this kind of chain-mounting allowed in linux or do we have to fix this in a different way?

pseudorandom name
May 6, 2007

NFS requires that the underlying filesystem satisfy some requirements that it itself does not satisfy.

shodanjr_gr
Nov 20, 2007

pseudorandom name posted:

NFS requires that the underlying filesystem satisfy some requirements that it itself does not satisfy.

e: Thanks for the fast reply!

Would there be any other way to work around this? I believe the NAS also supports CIFS export (which I don't think can be daisy-chained over NFS either) or probably SMB...

Ashex
Jun 25, 2007

These pipes are cleeeean!!!

evol262 posted:

The centos/RHEL recommendation is because they're branched off, so 6->7 is basically F12->F19. Fedora has an upgrade tool (fedup) which is necessary for big changes, like the move from /bin to /usr/bin. Other than that, you can mostly switch repos, import the new GPG key, and upgrade. Fedora is supported for 2 releases (~1 yr), basically, and upgrading from ${release-2} is supported through official tooling. Anything more than that is up to you.

Or you can just run rawhide and be on a rolling release. Not that I'd necessarily recommend that.

Incidentally, upgrading between major CentOS/RHEL releases was always technically possible, just fiddly, and generally involved hacking RPM up to read the new changes (xz, filedigests, whatever) by pulling in Fedora packages from when it happened, then adding a new glibc, librpm, rpm-python, and enough other base tools for rpm/yum to work. There's now an upgrade tool that's officially supported. Not that it'll always work, but at least it's something, and companies don't have to hire me to write them upgrade tools anymore.

Ah that's good to hear, I think I'll go with Fedora then. I'm tempted to get the beta since final release in only a couple weeks away but perhaps this will be a good way to test the upgrade path.

Major Ryan
May 11, 2008

Completely blank

evol262 posted:

There's now an upgrade tool that's officially supported. Not that it'll always work, but at least it's something, and companies don't have to hire me to write them upgrade tools anymore.

My experience is that the upgrade tool works. I'm not sure I'd use it for anything super mega critical, but I've used it to upgrade a few dev/test servers from CentOS 6 to 7 and it's been relatively pain-free.

It chucks out huge amounts of reporting before you do any updates, so you should also know what won't work before you get started. GUI being one of the messier things, but for servers that's possibly/probably not an issue.

evol262
Nov 30, 2010
#!/usr/bin/perl

Ashex posted:

Ah that's good to hear, I think I'll go with Fedora then. I'm tempted to get the beta since final release in only a couple weeks away but perhaps this will be a good way to test the upgrade path.

I'd just start with 21, honestly. It's stable, and it's going through a reorg into workstation/server/etc. Which can potentially be annoying during upgrades, but it's never happened before and will never happen again.

Ashex
Jun 25, 2007

These pipes are cleeeean!!!

evol262 posted:

I'd just start with 21, honestly. It's stable, and it's going through a reorg into workstation/server/etc. Which can potentially be annoying during upgrades, but it's never happened before and will never happen again.

Too late for that! Already installed 20 and am getting it setup, I realized while migrating configs and stuff over that I should use docker containers for the unpackaged things like commafeed/hastebin.

I am not a book
Mar 9, 2013
Am I alright using btrfs on an SSD?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

I am not a book posted:

Am I alright using btrfs on an SSD?

Do not use btrfs.

I am not a book
Mar 9, 2013

Suspicious Dish posted:

Do not use btrfs.

I'm not going to use ext*, so unless you're suggesting reiser4 you aren't really helping.
edit to be less sarcastic:
I'm explicitly interested in what's up and coming and I've ran it for years on spinning disks. I want to know whether it does anything that might damage or impair the performance of an SSD.

evol262
Nov 30, 2010
#!/usr/bin/perl

I am not a book posted:

I'm not going to use ext*, so unless you're suggesting reiser4 you aren't really helping.
edit to be less sarcastic:
I'm explicitly interested in what's up and coming and I've ran it for years on spinning disks. I want to know whether it does anything that might damage or impair the performance of an SSD.

I haven't seen filesystem zealotry in years.

ext* is probably the safest, most widely supported fs you can use. You're free to make other choices, and btrfs can be a reasonable one if you don't want to use lvm. It's safe for ssds. But there's not a good reason to be vehemently against ext4

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Feel free to use btrfs and watch it lose all your data like it did to me three times then. You're welcome to have strong opinions and make bad life choices as a result.

There's also XFS which is now the default in RHEL7.

I am not a book
Mar 9, 2013

Suspicious Dish posted:

Feel free to use btrfs and watch it lose all your data like it did to me three times then. You're welcome to have strong opinions and make bad life choices as a result.

Yeah, intentionally running unstable software definitely means that I won't do regular backups. :thumbsup:

evol262
Nov 30, 2010
#!/usr/bin/perl

I am not a book posted:

Yeah, intentionally running unstable software definitely means that I won't do regular backups. :thumbsup:

Make sure you don't cut yourself on that edge.

Great that you have backups. You wouldn't need them if you ran ext4+lvm (xfs is still a little picky).

But so you know, "I know what I'm doing so I'll just be defensive and sarcastic when somebody questions my choices" and "I ask for advice on an internet comedy forum" makes a fun dichotomy.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Great that you have backups. You wouldn't need them if you ran ext4+lvm (xfs is still a little picky).
Are you new to technology? Did you seriously imply that backups aren't necessary for, say, everyone ever?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

I am not a book posted:

Yeah, intentionally running unstable software definitely means that I won't do regular backups. :thumbsup:

1. What will you store the backups on if not btrfs?

2. Backups are nice but after your btrfs hosed your OS and you have to reinstall, it gets old quick.

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Let's put it another way: Is there even a single compelling reason for not using ext4 on a Linux system?

kujeger
Feb 19, 2004

OH YES HA HA

KozmoNaut posted:

Let's put it another way: Is there even a single compelling reason for not using ext4 on a Linux system?

btrfs' snapshots and subvolumes are pretty drat sexy.

I use snapshots for quick and frequent backups, btrfs-send them to another box, and have regular old backups on ext4 as well.

jre
Sep 2, 2011

To the cloud ?



KozmoNaut posted:

Let's put it another way: Is there even a single compelling reason for not using ext4 on a Linux system?

Some things work better on xfs e.g. Gluster.

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Well ok, then. If you have very specific specialized use cases, ext4 is usable, but not ideal.

It's still the best general-use Linux filesystem.

Cidrick
Jun 10, 2001

Praise the siamese

evol262 posted:

Great that you have backups. You wouldn't need them if you ran ext4+lvm (xfs is still a little picky).

Can you elaborate on the xfs comment? I've begun using XFS on all of our EL7 builds and have even begun using it on some EL6 hosts, figuring that if RedHat fully supports it to the point that it's now the default filesystem then I should probably follow suit. So far I, like it's toolset over ext* and like its default mount options (relatime by default is nice), but I haven't played with it for long enough to see any real gotchas for any of my workloads yet. Before I start putting mission-critical stuff on it, I'm curious to hear if anyone has had any poor experiences in using it.

Weird Uncle Dave
Sep 2, 2003

I could do this all day.

Buglord
Any comments on what special configuration (if any) I should do if I'm hosting Gluster on ext4, instead of xfs? For instance, gluster.org recommends using 512-byte inodes on XFS, and I assume I'd want to do the same for ext4. Any other gotchas?

At $WORK, I can use the pre-built Gluster binaries provided by gluster.org, but not anyone's prebuilt XFS binaries, because Politics. (If SGI offered them, we probably could use them, but third-party repackagers like EPEL are right out.) And we don't have the budget to get XFS from Red Hat. So I'm trying to do the best I can. Fortunately, for the project I'm working on, peak performance isn't really a requirement - it's basically a glorified, redundant LAMP stack Web hosting setup.

Ashex
Jun 25, 2007

These pipes are cleeeean!!!
I've noticed that samba is much slower on Fedora 20 than when I was on Archlinux, smb.conf is configured identically so it's not a weird setting (I think). What would cause this odd behavior?

jre
Sep 2, 2011

To the cloud ?



Weird Uncle Dave posted:

Any comments on what special configuration (if any) I should do if I'm hosting Gluster on ext4, instead of xfs? For instance, gluster.org recommends using 512-byte inodes on XFS, and I assume I'd want to do the same for ext4. Any other gotchas?

At $WORK, I can use the pre-built Gluster binaries provided by gluster.org, but not anyone's prebuilt XFS binaries, because Politics. (If SGI offered them, we probably could use them, but third-party repackagers like EPEL are right out.) And we don't have the budget to get XFS from Red Hat. So I'm trying to do the best I can. Fortunately, for the project I'm working on, peak performance isn't really a requirement - it's basically a glorified, redundant LAMP stack Web hosting setup.

Technically there is nothing special you have to do with ext4 but there was a bug with gluster 3.3.x and ext4 which would cause it to hang. http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/

Which branch are you thinking of using 3.4, 3.5 3.6 and which features ?

Weird Uncle Dave
Sep 2, 2003

I could do this all day.

Buglord
I probably will use 3.6. Nothing fancy, just two-node replication and either FUSE or NFS mounting (need to see if there's a meaningful performance difference in my environment).

jre
Sep 2, 2011

To the cloud ?



Weird Uncle Dave posted:

I probably will use 3.6. Nothing fancy, just two-node replication and either FUSE or NFS mounting (need to see if there's a meaningful performance difference in my environment).

Are you planning to use normal files or sparse vm image files ? There's currently a bug in 3.6 which messes up replicating with sparse file systems. It will apparently be fixed soon.

I think I'm still right in saying that if you use the nfs client it doesn't automatically fail over if a server goes down. The fuse glusterfs client does so that is something to consider. Also for 3.4, which we use, nfs was faster if you have lots ( millions) of small files not sure if the new features in 3.6 have closed that gap.

evol262
Nov 30, 2010
#!/usr/bin/perl

Weird Uncle Dave posted:

Any comments on what special configuration (if any) I should do if I'm hosting Gluster on ext4, instead of xfs? For instance, gluster.org recommends using 512-byte inodes on XFS, and I assume I'd want to do the same for ext4. Any other gotchas?

At $WORK, I can use the pre-built Gluster binaries provided by gluster.org, but not anyone's prebuilt XFS binaries, because Politics. (If SGI offered them, we probably could use them, but third-party repackagers like EPEL are right out.) And we don't have the budget to get XFS from Red Hat. So I'm trying to do the best I can. Fortunately, for the project I'm working on, peak performance isn't really a requirement - it's basically a glorified, redundant LAMP stack Web hosting setup.
Use XFS? We're an open source company. XFS is not limited to RHEL. SGI opened it years and years ago, and I'd be amazed if there were any distros which didn't support XFS.

Cidrick posted:

Can you elaborate on the xfs comment? I've begun using XFS on all of our EL7 builds and have even begun using it on some EL6 hosts, figuring that if RedHat fully supports it to the point that it's now the default filesystem then I should probably follow suit. So far I, like it's toolset over ext* and like its default mount options (relatime by default is nice), but I haven't played with it for long enough to see any real gotchas for any of my workloads yet. Before I start putting mission-critical stuff on it, I'm curious to hear if anyone has had any poor experiences in using it.

XFS is bad about journal replays. It can get into a state where there are unplayed transactions in the journal so you can't fsck it, but there are filesystem problems, so you can't mount it to replay the journal (xfs_repair doesn't do both). This is really apparent if you have an unclean shutdown with the system clock off, but any unclean shutdowns can trigger it.

It's the default because we can reasonably expect some systems to have filesystems which exceed the size limits of ext4 inside the RHEL lifecycle, not because it's technically better (though it is a lot better in some ways, it's worse in others).

Weird Uncle Dave
Sep 2, 2003

I could do this all day.

Buglord
We have a policy, and I agree that it's a dumb policy, that RPMs have to come either from Red Hat, or from the vendor/official site of whatever product we want to use. Third-party repos like EPEL are not allowed. As far as I'm aware, the only way to get the XFS RPMs for RHEL 6 is by paying for a Scalable File System entitlement. And I can't find any prebuilt XFS RPMs from SGI.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
XFS is part of the kernel, dude.

Weird Uncle Dave
Sep 2, 2003

I could do this all day.

Buglord
True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount? :iiam:

spankmeister
Jun 15, 2008






use ext4 on rhel 6 and xfs on rhel 7

Powered Descent
Jul 13, 2008

We haven't had that spirit here since 1969.

The only problem I've encountered with ext4 has been on an extremely busy mysql replication slave. The amount of overhead from the jbd2 process in the default journaling mode ('data=ordered') got excessive, and the slave started falling behind the master. That master, by the way, had identical OS and hardware but used XFS for its mysql data partition... and was running at HALF the load average of the slave, despite doing all the work for the app stack, while the slave was doing nothing but replicating the database.

In any case, we considered switching the slave over to XFS, but because of :effort: we found a half-assed solution that worked just as well: remounting the ext4 partition in data=writeback mode (a much more XFS-like journal). That brought the performance in line with the master and we're willing to accept the small risk of corruption from an unclean shutdown.

Cidrick
Jun 10, 2001

Praise the siamese

Weird Uncle Dave posted:

True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount? :iiam:

Unless I'm missing something, xfsprogs is definitely in the official RH/CentOS EL6 repos.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Weird Uncle Dave posted:

True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount? :iiam:
heavens to betsy a yum install

Adbot
ADBOT LOVES YOU

evol262
Nov 30, 2010
#!/usr/bin/perl

Weird Uncle Dave posted:

True, but mkfs.xfs isn't. Sure, I could mount it, but how would I have something to mount? :iiam:

Consider using a centos xfsprogs RPM. They're officially part of Red Hat now, so maybe it's good enough for your managers.

  • 1
  • 2
  • 3
  • 4
  • 5
  • Post
  • Reply