BlankSystemDaemon
Mar 13, 2009



Saukkis posted:

I think it would be more practical to mount the NFS share on /usr/local; that would be more compatible with modern Linux systems. There is too much stuff from the distro that installs under /usr.

The old division was /usr for binaries shared over the network and /usr/local for locally installed ones. Nowadays it's /usr for binaries provided by the distro and /usr/local for binaries provided by the local organisation.
The first part made me wonder how many things there are in a fully-merged /usr on a Linux distro that does it?

Here's the combined number of binaries, system binaries, multi-user binaries, and multi-user system binaries in FreeBSD as of 14-CURRENT a few months ago:
pre:
bc -e "`ls /bin | wc -l`+`ls /sbin | wc -l`+`ls /usr/bin | wc -l`+`ls /usr/sbin | wc -l`"
963


EDIT: I find it utterly confusing how you're supposed to make that distinction if everything's merged into /usr.
Similarly, I have no idea how you're supposed to do a directory listing and find what you need when you have to page through them all with more/less, or scroll-lock (I assume that hasn't been removed, but who knows at this point :v:)

BlankSystemDaemon fucked around with this message at 21:31 on Feb 21, 2022


BattleMaster
Aug 14, 2000

Saukkis posted:

I think it would be more practical to mount the NFS share on /usr/local; that would be more compatible with modern Linux systems. There is too much stuff from the distro that installs under /usr.

The old division was /usr for binaries shared over the network and /usr/local for locally installed ones. Nowadays it's /usr for binaries provided by the distro and /usr/local for binaries provided by the local organisation.

I'm serving a /usr from the same distro so at least that issue is sidestepped. It would probably be a lot better to have a minimal /usr and a network-mounted, much fuller /usr/local, but I wanted to see how extreme I could get.

This whole thing started with an experiment to see how overzealously paranoid I could get about minimizing writes to an SSD, even though with a modern SSD that isn't an issue.

With Debian 11, if you put at least /var, /home, and swap on a non-SSD, mount /tmp as a tmpfs, and install resolvconf (which symlinks /etc/resolv.conf to a boot-time-generated one in /run), you can mount / read-only and everything I've tried works perfectly. You need to remount it read-write to install or reconfigure software, which isn't a big deal for my use. It also prevents user/password/group changes unless you do so, which could be a problem for a system used by people without root access. Also, mounting /media as a tmpfs lets things create mount points at mount time, like Xfce does for removable storage and hard drives not in fstab when you open them.
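For illustration, the relevant fstab for a setup like that might look something like this (a sketch; device names are placeholders, not details from the post):
pre:
# /etc/fstab sketch for a read-only root; device names are placeholders
/dev/sda1   /       ext4    ro,errors=remount-ro   0 1
/dev/sdb1   /var    ext4    defaults               0 2
/dev/sdb2   /home   ext4    defaults               0 2
/dev/sdb3   none    swap    sw                     0 0
tmpfs       /tmp    tmpfs   defaults               0 0
tmpfs       /media  tmpfs   defaults               0 0
Maintenance then becomes mount -o remount,rw / before installing software and mount -o remount,ro / afterwards.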

I have a server (shell only) and a desktop (xfce) that have been running for like a month with this configuration just fine. They both have a new 128 GB SATA SSD and an old 160 GB drive I kept from older machines.

So, getting more extreme than this, I was wondering if you could get away with not even writing any of the base /usr stuff to begin with. Well, I could put /usr on a spinny drive, but where's the fun in that? I have my server sharing its /usr, and network-mounting it lets me use a bunch of programming languages and utilities that the VM doesn't have, so it's mostly successful - though it seems like Debian 11 gets cranky if it doesn't have a minimal /usr during shutdown. And running stuff that depends on configuration files in /etc would be a hassle to set up, I'm sure.

I don't really have a practical use case for full /usr network mounting, but mounting something on /usr/local to get access to a big pile of utilities without actually having to install them locally is tempting...

BattleMaster fucked around with this message at 22:11 on Feb 21, 2022

ExcessBLarg!
Sep 1, 2001
Well, mind you, it's not typical to have all packages in a distribution's repository installed on all systems, so the number is variable.

When I'm looking for a command I usually have an idea of what package it's part of, so I'll list all files in the package and grep for the /bin ones. As a bonus, this actually preserves the distinction between /bin and /usr/bin even though they're all actually located in /usr/bin now, if that's something that's meaningful to you.
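On a Debian-family system that might look like the following sketch (coreutils is just an example package):
pre:
# list a package's files and keep only the bin/sbin ones
dpkg -L coreutils | grep -E '^/(usr/)?s?bin/'
dpkg reports the paths the package shipped with, which is why the /bin vs. /usr/bin distinction survives a merged /usr.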

ExcessBLarg!
Sep 1, 2001

BattleMaster posted:

This whole thing started with an experiment to see how overzealously paranoid I could get about minimizing writes to an SSD, even though with a modern SSD that isn't an issue.
That sounds pretty rough actually, since the accesses that are most latency-sensitive are the ones going to the disk with the worst access times.

BattleMaster
Aug 14, 2000

Well, the computers are 15 years old so I wasn't shooting for peak performance :v:

One is a Dell PowerEdge 840 which I've had for a while but only recently sprung for the $20 needed to upgrade the anemic Celeron D it came with to a quad-core 2.66 GHz Xeon X3230. The other is some HP workstation desktop I found dumped on a lawn that originally came with a Pentium D (and wow, I had never heard a CPU fan spin up to full speed and stay there during file browsing before; the thermals are as bad as the rumors), but I fitted it with a dual-core 2.4 GHz Xeon 3060 that works perfectly even though the Q965 chipset's datasheet says nothing about Xeons working with it.

Edit: Though as old as things are, the performance I'm getting from the fileserver and RSS aggregation stuff I'm running on the server is actually pretty good - I have some non-terrible drives in the server for file storage and I can peg the gigabit ethernet on single large file transfers.

edit 2: Yes I have dumb hobbies, why do you ask?

BattleMaster fucked around with this message at 23:02 on Feb 21, 2022

RFC2324
Jun 7, 2012

http 418

is it possible to tell scp/rsync to not execute the .bashrc? on my jump boxes at work I am calling zsh from my bashrc (for some reason the login shell can't be changed, presumably because of the LDAP implementation; this is the solution the admins of the jump boxes provided) and rsync/scp fails if I don't comment that out.

ExcessBLarg!
Sep 1, 2001
I'm not certain, but you should be able to execute zsh from .bash_profile instead of .bashrc, which would only get executed on a login shell.

If that doesn't work the other trick I'd use would be to check an environment variable for something set by scp and not call zsh when that is present.
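A sketch of that kind of guard, using an interactivity test rather than a specific scp variable (my variation, not necessarily what ExcessBLarg! had in mind):
pre:
# at the top of ~/.bashrc: only switch to zsh for interactive shells,
# so scp/rsync (which run non-interactive shells) are unaffected
case $- in
  *i*) command -v zsh >/dev/null && exec zsh ;;
esac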

RFC2324
Jun 7, 2012

http 418

and another one making me nuts. why doesn't this work?

pre:
EXCLUDES="{cache,tmp*,.vault-token,.cache}"

for remote in $(awk -F, '{print $2}' $SCRIPTDIR/hosts.csv);do rsync -auhHS --exclude=$EXCLUDES $remote:~/ $TEMPDIR;done
running with bash -x gives me the following line:
pre:
+ rsync -auhHS '--exclude={cache,tmp*,.vault-token,.cache}' '<foo>:~/' /<bar>/
all I can think is that wrapping it in quotes like that is making rsync ignore the excludes, but that doesn't seem right

e: grabbed the wrong log line

RFC2324 fucked around with this message at 06:31 on Feb 23, 2022

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

ExcessBLarg! posted:

I think the point is that if you have a remote-mounted /usr then all you need to do to perform a system update is just update the NFS export and assume that running processes will eventually restart and /etc changes aren't a thing. It's the exact opposite philosophy of containers.

I don't think that would be practical. You have several machines sharing /usr, and when you run a package manager update on one of them it will update a bunch of stuff under /usr. All the other machines will now have access to the updated binaries too, but they will still think they are running the old versions of the packages. You run an update on another machine and it will redo the updates in /usr.


BlankSystemDaemon posted:

The first part made me wonder how many things there are in a fully-merged /usr on a Linux distro that does it?

I checked a Linux server at work with a minimal install. There were over 150 packages that owned files under /usr/bin.


BattleMaster posted:

I'm serving a /usr from the same distro so at least that issue is sidestepped. It would probably be a lot better to have a minimal /usr and a network-mounted, much fuller /usr/local, but I wanted to see how extreme I could get.

This whole thing started with an experiment to see how overzealously paranoid I could get about minimizing writes to an SSD, even though with a modern SSD that isn't an issue.

I suspect there are so few writes to /usr that this isn't worth the effort. You should take a more scientific approach: divide your system into as many separate mount points as you can be bothered to, then look at statistics on how many writes each of them receives.
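One low-effort way to gather those statistics, for what it's worth (a sketch; field 10 of /proc/diskstats is sectors written, and a sector there is 512 bytes):
pre:
# MiB written since boot, one line per disk/partition
awk '{printf "%-10s %.1f MiB written\n", $3, $10 * 512 / 1048576}' /proc/diskstats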

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
What are good conferences in the US with some topics on Linux internal crap? Like, the OS as a system and the kernel?

I see the Open Source Summit North America in Austin but I wonder if there are some other ones I should keep an eye on.

BlankSystemDaemon
Mar 13, 2009



Saukkis posted:

I checked a Linux server at work with a minimal install. There were over 150 packages that owned files under /usr/bin.
Is that a lot, though?
If each package has 1 file (which seems unlikely, but go with me here), it means that a Linux distribution has fewer files than FreeBSD, which doesn't seem right given that FreeBSD is ~16 million lines of code and something like Debian comes in at ~85 million.

ls /usr/bin | wc -l?

ExcessBLarg!
Sep 1, 2001

RFC2324 posted:

and another one making me nuts. why doesn't this work?
pre:
EXCLUDES="{cache,tmp*,.vault-token,.cache}"

for remote in $(awk -F, '{print $2}' $SCRIPTDIR/hosts.csv);do rsync -auhHS --exclude=$EXCLUDES $remote:~/ $TEMPDIR;done
Brace expansion only takes place when the statement is evaluated, and so can't directly be stored in a variable. You can do it with a sub-shell though. You probably want something like:
pre:
EXCLUDES=$(echo --exclude={cache,tmp*,.vault-token,.cache})

for remote in $(awk -F, '{print $2}' $SCRIPTDIR/hosts.csv);do rsync -auhHS $EXCLUDES $remote:~/ $TEMPDIR;done
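To see what the sub-shell buys you: the braces expand inside the command substitution, so the variable ends up holding the already-expanded words (an illustrative transcript, not from the post):
pre:
$ EXCLUDES=$(echo --exclude={cache,tmp*,.vault-token,.cache})
$ echo "$EXCLUDES"
--exclude=cache --exclude=tmp* --exclude=.vault-token --exclude=.cache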

ExcessBLarg!
Sep 1, 2001

Saukkis posted:

I don't think that would be practical. You have several machines sharing /usr, and when you run a package manager update on one of them it will update a bunch of stuff under /usr. All the other machines will now have access to the updated binaries too, but they will still think they are running the old versions of the packages. You run an update on another machine and it will redo the updates in /usr.
That's the point: you run the update on the host with the NFS export and you don't need to update the machines that mount /usr.

Consider a more comprehensive example: You have a compute cluster where the compute nodes have no locally attached storage. Instead, the compute nodes PXE-boot a kernel and ramdisk that contains a skeleton / and /etc, and NFS-mount /usr. Each node then runs the job manager and does whatever compute nodes do. Once your scheduled maintenance window hits, you run a distribution update on your cluster manager (or whatever exports /usr) and regenerate the ramdisk, then you reboot your compute nodes and they PXE-boot the updated kernel, skeleton / and /etc, and NFS-mount the updated /usr.
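The two ends of that might look roughly like the following; the hostname, subnet, and options are illustrative assumptions, not details from the post:
pre:
# on the cluster manager: /etc/exports
/usr    10.0.0.0/24(ro,no_subtree_check)

# on each compute node: /etc/fstab (or the ramdisk's mount script)
cluster-mgr:/usr    /usr    nfs    ro,nfsvers=4,hard    0 0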

I worked on systems like this in the mid-00s, back when mechanical disks were the only option for locally-attached storage, so having diskless 1U compute nodes was definitely of value. These days, though, there's little reason not to include an NVMe SSD in everything, so it may well make more sense to have a "full" OS install on each node and run your Docker/Kubernetes/whatever the kids do these days.

ExcessBLarg! fucked around with this message at 15:08 on Feb 23, 2022

ExcessBLarg!
Sep 1, 2001

BlankSystemDaemon posted:

If each package has 1 file (which seems unlikely, but go with me here), it means that a Linux distribution has fewer files than FreeBSD, which doesn't seem right given that FreeBSD is ~16 million lines of code and something like Debian comes in at ~85 million.
It's extremely rare for a single machine to have "all of Debian" installed. So yes, a minimum installation--just running debootstrap or something--won't have very much installed.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I have a really strange issue that keeps recurring: systemd-timesyncd, which I'm using for NTP, keeps forgetting my configuration and reverting back to the default NTP servers (ubuntu.com), causing clock disagreements across my machines.

I have this ansible role among the configuration management for my machines: https://github.com/stuvusIT/systemd-timesyncd . It's what I'm using to set up our specific NTP servers. All that this role does is replace /etc/systemd/timesyncd.conf with the correct configuration.

However, after an indeterminate amount of time, for reasons entirely unclear to me, the changes that this is making to /etc/systemd/timesyncd.conf are gone, and the default one is in place, with nothing but comment lines, and shortly thereafter I get clock drift, because the ubuntu NTP servers are not available in my environment. Clock drift is super dangerous, because these machines are a Ceph cluster!

What would be clobbering changes that I'm making to this service config file?

RFC2324
Jun 7, 2012

http 418

ExcessBLarg! posted:

Brace expansion only takes place when the statement is evaluated, and so can't directly be stored in a variable. You can do it with a sub-shell though. You probably want something like:
pre:
EXCLUDES=$(echo --exclude={cache,tmp*,.vault-token,.cache})

for remote in $(awk -F, '{print $2}' $SCRIPTDIR/hosts.csv);do rsync -auhHS $EXCLUDES $remote:~/ $TEMPDIR;done

This is really ugly. Is there a reasonable way to store it as an array, and then have it expand out to '--exclude=cache --exclude=.vault-token ...' since that is what the command is doing on the cli? Would

$( --exclude={cache,tmp*,.vault-token,.cache})

do that?

I'd just try it, but I'm not near my laptop atm and don't want to lose it

waffle iron
Jan 16, 2004

Twerk from Home posted:

I have a really strange issue that keeps recurring: systemd-timesyncd, which I'm using for NTP, keeps forgetting my configuration and reverting back to the default NTP servers (ubuntu.com), causing clock disagreements across my machines.

I have this ansible role among the configuration management for my machines: https://github.com/stuvusIT/systemd-timesyncd . It's what I'm using to set up our specific NTP servers. All that this role does is replace /etc/systemd/timesyncd.conf with the correct configuration.

However, after an indeterminate amount of time, for reasons entirely unclear to me, the changes that this is making to /etc/systemd/timesyncd.conf are gone, and the default one is in place, with nothing but comment lines, and shortly thereafter I get clock drift, because the ubuntu NTP servers are not available in my environment. Clock drift is super dangerous, because these machines are a Ceph cluster!

What would be clobbering changes that I'm making to this service config file?
You could also create a drop-in at /etc/systemd/timesyncd.conf.d/*.conf or /run/systemd/timesyncd.conf.d/*.conf to override the defaults. That is guaranteed not to be overwritten.
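A minimal sketch of such a drop-in; the file name and NTP server names are placeholders:
pre:
# /etc/systemd/timesyncd.conf.d/10-local-ntp.conf
[Time]
NTP=ntp1.example.internal ntp2.example.internal
Then restart the service (systemctl restart systemd-timesyncd) to pick it up.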

waffle iron fucked around with this message at 19:08 on Feb 23, 2022

ExcessBLarg!
Sep 1, 2001

RFC2324 posted:

Is there a reasonable way to store it as an array, and then have it expand out to '--exclude=cache --exclude=.vault-token ...' since that is what the command is doing on the cli?
Try:
pre:
EXCLUDES=(--exclude={cache,tmp*,.vault-token,.cache})

for remote in $(awk -F, '{print $2}' $SCRIPTDIR/hosts.csv);do rsync -auhHS "${EXCLUDES[@]}" $remote:~/ $TEMPDIR;done

RFC2324
Jun 7, 2012

http 418

ExcessBLarg! posted:

Try:
pre:
EXCLUDES=(--exclude={cache,tmp*,.vault-token,.cache})

for remote in $(awk -F, '{print $2}' $SCRIPTDIR/hosts.csv);do rsync -auhHS "${EXCLUDES[@]}" $remote:~/ $TEMPDIR;done

this doesn't work; it just populated cache and ignored the other options, but the echo does. I hate ugly hacks like that lol.

Thanks for all the help!

Kevin Bacon
Sep 22, 2010

dont know what the etiquette around distro recommendations are in this thread, but....


im maybe not extremely up to date, but my impression is that arch based distros tend to break too often, debian is too outdated, ubuntu is too weird/absolutely proprietary, then theres mint/popos/zorin which i have a weird aversion towards that i cant explain

ive used opensuse tw before which i really like and is my first choice. i wanted to use fedora, but i have to manually patch the kernel to enable ec_sys functionality for proper fan control on my laptop which is not ideal. opensuse was stable which i like, but it did start to break probably mainly due to laptop nvidia hybrid graphics. easily fixed with snapper, but then im thinking if im gonna have to rely on snapper for stability, is there any reason then not to just go for an arch based distro outside of aur repo security concerns?

Hughmoris
Apr 21, 2007
Let's go to the abyss!
Any recommendations on where to learn modern shell scripting? Not to become an expert, but to get familiar enough that I could put it on my resume for data analyst/engineering jobs. I'm comfortable with the basic concepts in programming and hacking away in Python.

Beginner project idea to help learn:

code:
- Use AWS Lambda to launch AWS EC2
- Execute bash script
    - use cURL to get several CSV feeds from API and save as CSV files
    - compress CSV files
    - call aws cli to upload csv files to s3
- Use AWS Lambda to stop EC2
That would all be pretty simple with Python but I figured I'd try my hand at a shell script.
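For the bash step in the middle, a rough sketch might look like this; the feed URLs, bucket name, and paths are all placeholders, and it assumes the aws CLI is configured on the instance:
pre:
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

BUCKET="s3://my-example-bucket"          # placeholder
FEEDS=(
  "https://api.example.com/feed1.csv"    # placeholder feed URLs
  "https://api.example.com/feed2.csv"
)

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

for url in "${FEEDS[@]}"; do
  out="$workdir/$(basename "$url")"
  curl --fail --silent --show-error "$url" -o "$out"
  gzip "$out"
done

aws s3 cp "$workdir" "$BUCKET/feeds/" --recursive --exclude '*' --include '*.gz'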

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Kevin Bacon posted:

dont know what the etiquette around distro recommendations are in this thread, but....


im maybe not extremely up to date, but my impression is that arch based distros tend to break too often, debian is too outdated, ubuntu is too weird/absolutely proprietary, then theres mint/popos/zorin which i have a weird aversion towards that i cant explain

ive used opensuse tw before which i really like and is my first choice. i wanted to use fedora, but i have to manually patch the kernel to enable ec_sys functionality for proper fan control on my laptop which is not ideal. opensuse was stable which i like, but it did start to break probably mainly due to laptop nvidia hybrid graphics. easily fixed with snapper, but then im thinking if im gonna have to rely on snapper for stability, is there any reason then not to just go for an arch based distro outside of aur repo security concerns?

Difficult to recommend anything without knowing what you're planning on using the computer for.

Arch is fine for a hobby OS. Since packages are pushed out very soon after upstream release, there's occasionally some minor breakage that doesn't get caught in testing.

Kevin Bacon
Sep 22, 2010

Keito posted:

Difficult to recommend anything without knowing what you're planning on using the computer for.

Arch is fine for a hobby OS. Since packages are pushed out very soon after upstream release, there's occasionally some minor breakage that doesn't get caught in testing.

oh yeah thats an important detail. its my laptop so when im home its honestly pretty much a browser/youtube machine, and when im traveling with it i tend to want to play some games on it too. so nvidia hybrid graphics compatibility, general day-to-day stability (im not against troubleshooting and tinkering but there comes a point where i just want to watch youtube videos on the couch instead of trying to figure out why pipewire is broken or why my wm is suddenly crashing - which i guess makes arch not exactly ideal, but does snapper/timeshift mitigate this in any meaningful way?) is something i value. but i also want that perpetual new nvidia driver that is supposed to make everything work good now rather than later, which probably puts me into having a cake and eating it too territory

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Kevin Bacon posted:

oh yeah thats an important detail. its my laptop so when im home its honestly pretty much a browser/youtube machine, and when im traveling with it i tend to want to play some games on it too. so nvidia hybrid graphics compatibility, general day-to-day stability (im not against troubleshooting and tinkering but there comes a point where i just want to watch youtube videos on the couch instead of trying to figure out why pipewire is broken or why my wm is suddenly crashing - which i guess makes arch not exactly ideal, but does snapper/timeshift mitigate this in any meaningful way?) is something i value. but i also want that perpetual new nvidia driver that is supposed to make everything work good now rather than later, which probably puts me into having a cake and eating it too territory

If you want day-to-day stability, I'd just pick either Debian or Ubuntu, depending on whether you want newer packages, easier access to non-libre stuff, and potentially newer hardware support/drivers (Ubuntu), or the community-driven and more stable long-term Debian project.

Edit: I realize that this is a holy war and I'm recommending the most boring options, but they're among the safest.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Hughmoris posted:

Any recommendations on where to learn modern shell scripting? Not to become an expert, but to get familiar enough that I could put it on my resume for data analyst/engineering jobs. I'm comfortable with the basic concepts in programming and hacking away in Python.

Beginner project idea to help learn:

code:
- Use AWS Lambda to launch AWS EC2
- Execute bash script
    - use cURL to get several CSV feeds from API and save as CSV files
    - compress CSV files
    - call aws cli to upload csv files to s3
- Use AWS Lambda to stop EC2
That would all be pretty simple with Python but I figured I'd try my hand at a shell script.
Just use Python. Any reason you want to try to do that with bash?

Hughmoris
Apr 21, 2007
Let's go to the abyss!

Bob Morales posted:

Just use Python. Any reason you want to try to do that with bash?

This is just practicing on personal projects, the bigger aim being to get more familiar with AWS in hopes of landing a gig. Seems like a nice tool to have in the toolkit, even if the knowledge is basic.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Hughmoris posted:

Any recommendations on where to learn modern shell scripting? Not to become an expert, but to get familiar enough that I could put it on my resume for data analyst/engineering jobs. I'm comfortable with the basic concepts in programming and hacking away in Python.

Beginner project idea to help learn:


If you're using Bash, I recommend running shellcheck on your scripts, which will highlight potential issues and also teach a lot of best practices, with good explanations of why they exist.

Some random tips:
- all vars are global by default. It's common to use the convention UPPERCASE for globals, lowercase for locals.
- use `[[ ]]` for conditionals vs `[ ]`, it's more powerful (but less portable)
- use `set -o errexit -o xtrace` at the top; this will both exit as soon as an error is encountered and print out each command as it runs.
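Putting those tips together, a script skeleton might look like this; adding nounset is my own habit, not one of minato's tips:
pre:
#!/usr/bin/env bash
set -o errexit -o nounset -o xtrace

readonly OUTPUT_DIR="/tmp/example"   # global: UPPERCASE by convention

greet() {
  local name="${1:-world}"           # local: lowercase by convention
  if [[ -n "$name" ]]; then
    echo "hello, $name" > "$OUTPUT_DIR/greeting.txt"
  fi
}

mkdir -p "$OUTPUT_DIR"
greet "goon"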

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Hughmoris posted:

This is just practicing on personal projects, the bigger aim being to get more familiar with AWS in hopes of landing a gig. Seems like a nice tool to have in the toolkit, even if the knowledge is basic.


https://tldp.org/LDP/Bash-Beginners-Guide/Bash-Beginners-Guide.pdf

https://tldp.org/LDP/abs/abs-guide.pdf

Hughmoris
Apr 21, 2007
Let's go to the abyss!

minato posted:

If you're using Bash, I recommend running shellcheck on your scripts, which will highlight potential issues and also teach a lot of best practices, with good explanations of why they exist.

Some random tips:
- all vars are global by default. It's common to use the convention UPPERCASE for globals, lowercase for locals.
- use `[[ ]]` for conditionals vs `[ ]`, it's more powerful (but less portable)
- use `set -o errexit -o xtrace` at the top; this will both exit as soon as an error is encountered and print out each command as it runs.


Thanks!

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!


More random tips:

https://sipb.mit.edu/doc/safe-shell/

https://arslan.io/2019/07/03/how-to-write-idempotent-bash-scripts/

RFC2324
Jun 7, 2012

http 418

As someone who does basically everything in bash, I suggest learning python instead.

It's way easier to do anything complex, and way more powerful. You don't find yourself doing dumb crap like echoing a string inside a sub-process to populate a variable, because you aren't dealing with bashisms.

Bash is good and all, but python is 100x better

xzzy
Mar 5, 2009

minato posted:

If you're using Bash, I recommend running shellcheck on your scripts, which will highlight potential issues and also teach a lot of best practices, with good explanations of why they exist.

Some random tips:
- all vars are global by default. It's common to use the convention UPPERCASE for globals, lowercase for locals.
- use `[[ ]]` for conditionals vs `[ ]`, it's more powerful (but less portable)
- use `set -o errexit -o xtrace` at the top; this will both exit as soon as an error is encountered and print out each command as it runs.

Use $() for command substitutions instead of backticks (``). I find it much more readable.

Read up on while loops: when you pipe into one, the loop body runs in a subshell, which can trip people up because variables set inside the loop are lost once it ends.
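An illustrative sketch of that gotcha, plus the usual workaround (process substitution keeps the loop in the current shell):
pre:
count=0
printf 'a\nb\n' | while read -r line; do
  count=$((count + 1))   # increments a copy inside the pipeline's subshell
done
echo "$count"            # prints 0

count=0
while read -r line; do
  count=$((count + 1))   # same shell, so the change persists
done < <(printf 'a\nb\n')
echo "$count"            # prints 2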

edit - and yes, once a bash script is over about 20 lines, you should rethink your approach and do it in python. Shell can do complex things, but it's at its best when limited to extremely simple tasks.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Bob Morales posted:

Just use Python. Any reason you want to try to do that with bash?

FWIW I basically got my new job by being able to read one of their bash scripts and telling them exactly what it did in an interview. So learning it is useful.

(it looked like half of it was cut-and-pasted from the same place that whoever created half the nagios checks at a previous job of mine found when they googled)

RFC2324
Jun 7, 2012

http 418

xzzy posted:


edit - and yes, once a bash script is over about 20 lines, you should rethink your approach and do it in python. Shell can do complex things, but it's at its best when limited to extremely simple tasks.

I just wrote a 300-line bash script to automate my role's most common boring task :negative:

I just can't get python logic to stick; bash ruined me

ExcessBLarg!
Sep 1, 2001

Kevin Bacon posted:

debian is too outdated
Debian has been making stable releases every two years. If that's still too outdated (which may be a fair complaint on a recent laptop) there's "unstable", which is effectively a rolling release no worse than Arch, and "testing", which is a week behind unstable and holds back package upgrades that have critical bug reports.

Kevin Bacon posted:

ubuntu is too weird/absolutely proprietary
It's not really any more weird/proprietary compared to Debian or even Fedora now. The default desktop is GNOME, init is systemd, and the display system is Wayland. Gone are Unity, Upstart, and Mir. There are also the Ubuntu derivatives (Kubuntu/Xubuntu/Lubuntu) if you want something different or even more traditional.

ExcessBLarg!
Sep 1, 2001
Probably a quarter of my professional programming is Bash scripts. Agree with "set -ex". One thing I'd recommend is getting familiar with quoting rules and consistently quoting variables used in commands (or knowing why you're not). It's pretty rare for Bash scripts to have to deal with file names that have spaces in them, but it's always good to write scripts that handle file names with spaces and other special characters correctly.

Bonus points: Learn the difference between "$*" and "$@", or their equivalents for arrays ("${arr[*]}" vs. "${arr[@]}").
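A quick illustration of the difference (my example, not from the post):
pre:
args() { printf '<%s>\n' "$@"; }

set -- one "two three"
args "$@"   # prints <one> then <two three>: arguments stay separate words
args "$*"   # prints <one two three>: everything joined into a single word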

Regarding Python: My professional opinion is that it's a garbage language that's better (but not necessarily more convenient) than shell scripting or Perl, generally "good enough" to have gotten really popular, but worse than Ruby in every metric aside from maybe third-party library support and general popularity. I don't write much Python, but every time I do I'm reminded why it's trash.

BlankSystemDaemon
Mar 13, 2009



Remember, the smartest possible thing you could possibly do is curl a URI of untrusted code and pipe it into your shell, especially if you run it with sudo.
All the cool kids are doing it, so you should too!

RFC2324
Jun 7, 2012

http 418

BlankSystemDaemon posted:

Remember, the smartest possible thing you could possibly do is curl a URI of untrusted code and pipe it into your shell, especially if you run it with sudo.
All the cool kids are doing it, so you should too!

Just lol if your scripts don't check if NOPASSWD is set and just sudo themselves up

Methanar
Sep 26, 2013

by the sex ghost
https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/


Rojo_Sombrero
May 8, 2006
I ebayed my EQ account and all I got was an SA account

Kevin Bacon posted:

oh yeah thats an important detail. its my laptop so when im home its honestly pretty much a browser/youtube machine, and when im traveling with it i tend to want to play some games on it too. so nvidia hybrid graphics compatibility, general day-to-day stability (im not against troubleshooting and tinkering but there comes a point where i just want to watch youtube videos on the couch instead of trying to figure out why pipewire is broken or why my wm is suddenly crashing - which i guess makes arch not exactly ideal, but does snapper/timeshift mitigate this in any meaningful way?) is something i value. but i also want that perpetual new nvidia driver that is supposed to make everything work good now rather than later, which probably puts me into having a cake and eating it too territory

I don't understand the aversion to Mint. It's been a good daily driver for my HP laptop so far. I can do some light gaming, e.g. MTGA and WoT, without issue. It works rather well out of the box.
