|
F_Shit_Fitzgerald posted:So setting shopt -s nullglob will prevent a SNAFU where it starts deleting all jpg files on my system, if I'm understanding correctly? Will do; that's an easy fix. I should have already done that. Another safeguard is to use full paths: 'mv /home/me/Pictures/temp/*.jpg /home/me/Media/'.
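A quick sketch of what nullglob actually changes (the temp directories here are hypothetical stand-ins for the real paths):

```shell
#!/usr/bin/env bash
# Without nullglob, an unmatched '*.jpg' is passed to mv as the literal
# string '*.jpg'; with it, the glob simply vanishes.
shopt -s nullglob

src=$(mktemp -d)   # hypothetical stand-in for /home/me/Pictures/temp
dst=$(mktemp -d)   # hypothetical stand-in for /home/me/Media

files=("$src"/*.jpg)              # empty array when nothing matches
if [ "${#files[@]}" -eq 0 ]; then
    echo "no jpgs, nothing to move"
else
    mv -- "${files[@]}" "$dst"/
fi
```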
|
# ? Mar 23, 2024 13:02 |
|
Alternatively, use find. code:
|
# ? Mar 23, 2024 13:13 |
|
Pablo Bluth posted:Alternatively, use find Of course that has the danger that if the 'cd' fails, the find will run in whatever directory you happened to be in. The easy option is to run it as 'cd /path/... && find . -iname ...'. The other option is to include 'set -o errexit' in your script.
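A sketch of that && guard (directory names here are made up):

```shell
#!/usr/bin/env bash
workdir=$(mktemp -d)               # hypothetical stand-in for the real path
touch "$workdir/photo.JPG"

# && makes the guard explicit: if cd fails, find never runs at all.
cd "$workdir" && find . -iname '*.jpg'

if ! cd /no/such/dir 2>/dev/null; then
    echo "cd failed, so no find ran in the wrong directory"
fi
```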
|
# ? Mar 23, 2024 13:33 |
|
Hardcoding the paths in find is probably the 'correct style'. code:
code:
Pablo Bluth fucked around with this message at 13:46 on Mar 23, 2024 |
# ? Mar 23, 2024 13:36 |
|
Yeah, scripts should probably try to use bash strict mode.
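For reference, "strict mode" is usually spelled like this at the top of the script:

```shell
#!/usr/bin/env bash
# -e: exit on any failing command; -u: error on unset variables;
# -o pipefail: a pipeline fails if any stage fails, not just the last.
set -euo pipefail

greeting="hello"
echo "$greeting, world"
```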
|
# ? Mar 23, 2024 13:50 |
|
Use them ampersands. cd /tmp/asldjkfalsdf && find -name foo (but yes, it's better to use the full path as an argument to find, it'll do the same error checking) I'm no addict to making stupidly complex one liners but if statements in bash are so ugly and cumbersome that I'll use && unless I need to do extra logic inside the conditional.
|
# ? Mar 23, 2024 14:03 |
|
My name is Pablo and I am a Find addict.
|
# ? Mar 23, 2024 14:07 |
|
Pablo Bluth posted:But cd failing is a good point. You can never have too many checks in a script. (well you probably can, but I suspect nearly all scripts don't have enough and hence many have dangerous failure modes lurking in them) See also: someone made a theme for KDE 6 that had some internal scripting and did a whoopsie with rm -rf, which deleted someone's filesystem. And now there is big drama on the KDE reddit as people suddenly discover that the DE made for anything-goes tweakers might not be vetting the giant pile of user-submitted themes and geegaws. And that global themes which can change your UI into a fully-functional LCARS replica might be, shock and horror, running code!
|
# ? Mar 23, 2024 14:46 |
|
I've loaded the arch/manjaro hibernate/suspend systemd service page 5 times in the past couple of months and been unable to parse it at all. Hibernation is such a useful feature for laptops. Why do I need to check the kernel and like five different places (exaggeration) to make it work?
|
# ? Mar 23, 2024 14:49 |
|
I think there's basically no legit use case for a bash script that doesn't set at least -e, -u and probably pipefail. If you do actually need to run a command that might fail and still continue, there are ways to write that which clearly show intent and don't ruin everything else. Same for potentially unset variables. Always do set -eu; it will definitely save your rear end some day if you write more than ten lines of bash in your life.
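A sketch of those "allowed to fail" escape hatches under set -eu:

```shell
#!/usr/bin/env bash
set -euo pipefail

haystack=$(mktemp)
echo "nothing to see" > "$haystack"

# Under -e, a command that may legitimately fail must say so: '|| true'
# keeps going, and '|| status=$?' keeps going while recording the status.
grep -q needle "$haystack" || true
status=0
grep -q needle "$haystack" || status=$?
echo "grep exited with $status, script still running"

# Under -u, a possibly-unset variable gets an explicit default.
name="${MAYBE_UNSET_VAR:-fallback}"
echo "name=$name"
```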
|
# ? Mar 23, 2024 15:22 |
|
xzzy posted:Use them ampersands. I generally prefer cd /whatever || { echo "Unable to cd to /whatever" >&2; exit 1; }. It seems more readable, though of course that's idiosyncratic.
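One way to write that pattern, with a small helper (die is a hypothetical name, not a builtin):

```shell
#!/usr/bin/env bash
# die: print a message to stderr and abort.
die() { echo "$*" >&2; exit 1; }

target=$(mktemp -d)    # hypothetical stand-in for /whatever
cd "$target" || die "Unable to cd to $target"
echo "now working in $PWD"
```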
|
# ? Mar 23, 2024 15:51 |
|
Pablo Bluth posted:
Testing on $? is error-prone; you can branch on the return status directly. code:
code:
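The direct-branch style looks like this (a generic sketch, not the lost snippet):

```shell
#!/usr/bin/env bash
tmpdir=$(mktemp -d)

# Feed the command itself to 'if': nothing can run between the command
# and the test, so there is no stale $? to mis-read.
if cd "$tmpdir"; then
    echo "now in $PWD"
else
    echo "cd failed" >&2
fi

if ! cd /no/such/dir 2>/dev/null; then
    echo "handled the failure without ever touching \$?"
fi
```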
|
# ? Mar 23, 2024 16:00 |
|
Here's another one I'm almost embarrassed to ask: I just got an external hard drive for backups that uses NTFS. Online searching yields somewhat conflicting advice about whether I should format this drive (possibly to ext3 or 4) before I use it on Mint, or whether my data* should be fine. I have been interchangeably using USB sticks between Mac, Linux and Windows with no apparent issues, so my instinct is not to worry about it. I thought I'd ask this thread before I did anything (the drive is hooked in but not mounted). * Stuff like my Music directory and various videos I don't necessarily want cluttering up my Linux machine. F_Shit_Fitzgerald fucked around with this message at 17:10 on Mar 23, 2024 |
# ? Mar 23, 2024 17:05 |
|
It doesn't really matter for backups, AFAIK; it's just read more slowly by Linux, so you don't want to be running games off an NTFS drive. That's the overall view I got out of my own searches last time I looked, at least. Happy to be corrected.
|
# ? Mar 23, 2024 17:21 |
|
F_Shit_Fitzgerald posted:I'm trying to set up a shell script so that if it detects any files of a certain type, those are moved to a separate directory. For example: The way I usually deal with this sort of thing is: code:
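cruft's actual snippet didn't survive the scrape; one common shape for "move files of a type if any exist" is a guarded loop, sketched here (the function name and paths are hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical helper, not the original post's code.
move_jpgs() {
    local src=$1 dst=$2 f
    shopt -s nullglob              # unmatched glob -> empty loop, not '*.jpg'
    for f in "$src"/*.jpg; do
        mv -- "$f" "$dst"/
    done
}
```

Called as, e.g., move_jpgs "$HOME/Pictures/temp" "$HOME/Media".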
|
# ? Mar 23, 2024 17:24 |
|
F_Shit_Fitzgerald posted:Here's another one I'm almost embarrassed to ask: I just got an external hard drive for backups that uses NTFS. Online searching yields somewhat conflicting advice about whether I should format this drive (possibly to ext3 or 4) before I use it on Mint, or whether my data* should be fine. I have been interchangeably using thumb sticks between Mac, Linux and Windows with no apparent issues, so my instinct is not to worry about it. I thought I'd ask this thread before I did anything (the drive is hooked in but not mounted). NTFS support is very good these days. AFAIK it's not as "fast" as other linux-native filesystems but for an external HDD that's a pointless comparison. If you want to keep using it as a backup drive, and also want it to be usable in other PCs with Windows, leaving it as NTFS is fine. There are a few potential problems (e.g. Linux allows you to use characters in filenames that windows doesn't allow) but not many. And linux just totally ignores the NTFS permissions. If this drive will be 100% dedicated to linux, a linux-native FS has some advantages like ownership & permissions, or maybe btrfs with checksums for integrity. (I kept using NTFS on my backup drive for several months during my switch from windows to linux, as the emergency abort parachute. Then I switched to btrfs when I figured out send|receive for backups.)
|
# ? Mar 23, 2024 17:26 |
|
Klyith posted:NTFS support is very good these days. AFAIK it's not as "fast" as other linux-native filesystems but for an external HDD that's a pointless comparison. ziasquinn posted:it doesn't really matter for back ups afaik. it just is read slower by Linux so you don't wanna be trying to run games off a NTFS drive Cool; thanks! I decided to just go ahead and reformat to ext4 and save myself the trouble of chown'ing files from root->my_username. Using disks on Mint it was extremely easy, thank god. cruft posted:The way I usually deal with this sort of thing is: Huh. OK, I'll try that. Thanks!
|
# ? Mar 23, 2024 17:43 |
|
Also also, maybe overkill for home use but if you ever write a bash script at work (hello half my career), do yourself a favour and run shellcheck. It will catch, and tell you how to fix, basically every common error known to man.
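For example, the classic unquoted-variable bug that shellcheck flags as SC2086:

```shell
#!/usr/bin/env bash
# Write a deliberately buggy script; the shellcheck run is guarded since
# the tool may not be installed everywhere.
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/sh
dir=$1
rm -rf $dir/tmp   # unquoted: spaces or globs in $dir split the argument
EOF

if command -v shellcheck >/dev/null 2>&1; then
    shellcheck "$demo" || echo "shellcheck flagged the bug"
fi
```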
|
# ? Mar 23, 2024 17:50 |
|
Phosphine posted:I think there's basically no legit use case for a bash script that doesn't set at least -e, -u and probably pipefail. If you do actually need to run a command that might fail and still continue, there are ways to write that that clearly shows intent and doesn't ruin everything else. Same for potentially unset variables. Always do set -eu, it will definitely save your rear end some day if you write more than ten lines of bash in your life. -u sucks, it breaks $_, and -e sucks for dying silently unless you set a trap. Just catch all the errors with || instead.
|
# ? Mar 23, 2024 18:37 |
|
Phosphine posted:bash script Am I the only one still alive who cares about scripts that run with Bourne Shell (dash for Linux people)? Paging BlankSystemDaemon
|
# ? Mar 23, 2024 18:58 |
|
We should all be using Perl for our scripting needs....
|
# ? Mar 23, 2024 19:05 |
|
cruft posted:The way I usually deal with this sort of thing is: the safest way to do this is code:
E: and if you’re moving it to network storage or something else I/O bound, running the commands in parallel with -P can speed things up without having to manage background jobs and waiting for everything and all those headaches Subjunctive fucked around with this message at 19:10 on Mar 23, 2024 |
# ? Mar 23, 2024 19:06 |
|
Subjunctive posted:the safest way to do this is why not: code:
code:
Subjunctive posted:E: and if you’re moving it to network storage or something else I/O bound, running the commands in parallel with -P can speed things up without having to manage background jobs and waiting for everything and all those headaches But if you're moving across filesystems onto a FAT, running in parallel will increase fragmentation and slow down read times! cruft fucked around with this message at 19:13 on Mar 23, 2024 |
# ? Mar 23, 2024 19:09 |
|
In closing, OP, you should just write it in Python.
|
# ? Mar 23, 2024 19:11 |
|
cruft posted:In closing, OP, you should just write it in Python. code:
|
# ? Mar 23, 2024 19:15 |
|
cruft posted:why not because you’ll run a mv for each file instead of batching them; probably doesn’t matter here but in general is helpful if the program does more per invocation (like a new connection for scp) it’s also easier to adapt my form to using an existing file of filenames, which I have often found useful when I want to filter out some part of the list using grep (I can never remember the proper way to use -a/-o/-!)
|
# ? Mar 23, 2024 19:16 |
|
Subjunctive posted:because you’ll run a mv for each file instead of batching them; probably doesn’t matter here but in general is helpful if the program does more per invocation (like a new connection for scp) LOL, okay, you're not wrong. I was going to call you out for splitting hairs until I re-read my posts on this page. The Linux Questions Thread: Ultimate Bikeshedding
|
# ? Mar 23, 2024 19:18 |
|
cruft posted:LOL, okay, you're not wrong. I was going to call you out for splitting hairs until I re-read my posts on this page. we learned these lessons the hard way, one mutilated directory tree or flooded /var/spool/mail/root at a time! we should pass them on!
|
# ? Mar 23, 2024 19:21 |
|
I use parallel for anything more complicated than picking my nose
|
# ? Mar 23, 2024 20:04 |
|
cruft posted:Am I the only one still alive who cares about scripts that run with Bourne Shell (dash for Linux people)? No, there are tons of projects on GitHub that proudly advertise 100% POSIX compatibility. I have to use POSIX for some container stuff. Shellcheck is the way to go.
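A taste of the dialect difference: the same check written portably, with no bashisms:

```shell
#!/bin/sh
# POSIX sh: no [[ ]], no arrays; pattern matching goes through 'case'.
name="world"
greeting=""
if [ "$name" = "world" ]; then      # single brackets, '=' not '=='
    greeting="hello $name"
fi
case $greeting in
    hello*) echo "matched with case, not [[ == hello* ]]" ;;
esac
```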
|
# ? Mar 23, 2024 20:59 |
|
I feel ashamed of this. I know my network setup is stupid, and there are better ways to do it. But I don't wanna redo it, and if it works it works, I guess?

Sure, curl didn't work, but I found another stupid solution that didn't require me redoing the whole thing. I just installed tinyproxy, made it only accessible from the local machine, and then curl --proxy "http://127.0.0.1:8888" "https://api.ipify.org" suddenly works, for mysterious reasons. I don't know why tinyproxy works on the same interface but curl doesn't, but oh well, everything works now, even if it is stuck together with duct tape and chewed gum. The script works, my DNS stuff gets updated every time my ISP changes my IP, the jellyfin server works, and I have actual reliable uptime now that isn't dependent on me manually updating my DNS entries. I dunno, it's not stupid if it works, right?

The one thing I need to figure out, though (it's minor): there is a rule that port 8096 always goes through the no-vpn interface, so everyone can access my jellyfin server. But I also have a friend in another country running his own jellyfin server that I want to connect to, and that needs to go through the vpn interface. It doesn't, though, because port 8096. I could change the port mine is on, but people already have it bookmarked. I'm not sure what to do here to be able to use the vpn interface to connect to his jellyfin server.
|
# ? Mar 24, 2024 10:01 |
|
https://arstechnica.com/security/2024/03/backdoor-found-in-widely-used-linux-utility-breaks-encrypted-ssh-connections/ Backdoor added by an xz contributor with 2+ years of contributions. That sucks.
|
# ? Mar 29, 2024 21:54 |
|
Red Hat gave the CVE a 10.0 score too... so yeah, patch. The fact that it was discovered before the latest release trickled far and wide will really limit the damage, though.
|
# ? Mar 29, 2024 21:59 |
|
xzzy posted:Redhat gave the CVE a 10.0 score too.. so yeah, patch. Yeah, Debian stable is on *checks system* ... 5.4.1. Yay?
|
# ? Mar 29, 2024 22:10 |
|
"There are no known reports of those versions being incorporated into any production releases for major Linux distributions, but both Red Hat and Debian reported that recently published beta releases used at least one of the backdoored versions—specifically, in Fedora 40 and Fedora Rawhide and Debian testing, unstable and experimental distributions. A stable release of Arch Linux is also affected. That distribution, however, isn't used in production systems." At least everything isn't on fire this time. Also, lol@the Arch statement. I know there's at least one flogging it to management in every org.
|
# ? Mar 29, 2024 22:47 |
|
AlexDeGruven posted:At least everything isn't on fire this time. Amen. I work in incident response. This scenario keeps me awake at night. I'm really worried about the one nobody's found yet.
|
# ? Mar 29, 2024 22:59 |
|
For fellow users of arch-likes, if you have a default setup for the pacman cache, you can downgrade like so: code:
OTOH if you don't have sshd running, I don't think there is anything to worry about? So you have to be in the small intersection of people who: 1) use an unstable / rolling release distro 2) have enabled sshd 3) be passing ssh through your home firewall, to be critically vulnerable. AlexDeGruven posted:Also, lol@the Arch statement. I know there's at least one flogging it to management in every org. oh god why
|
# ? Mar 29, 2024 23:00 |
|
jkq posted:Yeah, Debian stable is on *checks system* ... 5.4.1. Yay? sid is now on liblzma5:amd64 5.6.1+really5.4.5-1 lmao
|
# ? Mar 29, 2024 23:06 |
|
Fedora 40 just downgraded xz, though from the original report it probably wasn't vulnerable. But you never know.
pre:
 xz        x86_64  1:5.4.6-3.fc40  updates-testing    2.0 MiB
   replacing  xz        x86_64  5.6.0-3.fc40  updates-testing    2.1 MiB
 xz-devel  x86_64  1:5.4.6-3.fc40  updates-testing  255.8 KiB
   replacing  xz-devel  x86_64  5.6.0-3.fc40  updates-testing  255.7 KiB
 xz-libs   i686    1:5.4.6-3.fc40  updates-testing  229.2 KiB
   replacing  xz-libs   i686    5.6.0-3.fc40  updates-testing  230.5 KiB
 xz-libs   x86_64  1:5.4.6-3.fc40  updates-testing  209.8 KiB
   replacing  xz-libs   x86_64  5.6.0-3.fc40  updates-testing  211.1 KiB
|
# ? Mar 29, 2024 23:09 |
|
cruft posted:Amen. Keep looking!
|
# ? Mar 29, 2024 23:10 |