Kevin Bacon
Sep 22, 2010

Rojo_Sombrero posted:

I don't understand the aversion to Mint. It's been a good daily driver for my HP laptop so far. I can do some light gaming, e.g. MTGA and WoT, without issue. It works rather well out of the box.

it is entirely irrational and probably purely based on my tendency to overcomplicate things. i might very well end up on mint or some other debian based distro after banging my head against an arch based distro or suse for a while


CaptainSarcastic
Jul 6, 2013



Kevin Bacon posted:

dont know what the etiquette around distro recommendations are in this thread, but....


im maybe not extremely up to date, but my impression is that arch based distros tend to break too often, debian is too outdated, ubuntu is too weird/absolutely proprietary, then theres mint/popos/zorin which i have a weird aversion towards that i cant explain

ive used opensuse tw before which i really like and is my first choice. i wanted to use fedora, but i have to manually patch the kernel to enable ec_sys functionality for proper fan control on my laptop which is not ideal. opensuse was stable which i like, but it did start to break probably mainly due to laptop nvidia hybrid graphics. easily fixed with snapper, but then im thinking if im gonna have to rely on snapper for stability, is there any reason then not to just go for an arch based distro outside of aur repo security concerns?

In my experience running Tumbleweed, the issues with Nvidia drivers mostly happened when they got out of step with kernel updates, so I would put them off (if I remembered) until I saw a kernel update go through or otherwise verified they didn't cause problems.

Because of that, and getting tired of the constant updates on Tumbleweed, I ended up going back to the standard Leap spin of OpenSUSE, and haven't had any issues with Nvidia since. Doing version updates has gotten smoother and smoother over the years in my experience, and since I keep /home as a separate partition even a reinstall is pretty painless nowadays.
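
If it's useful, zypper's package locks make the "put the driver update off" part pretty painless. A rough sketch; 'nvidia*' is a guess at the package-name pattern, so check what zypper se nvidia shows on your system first:

code:
    # Hold the proprietary driver back until a matching kernel has landed
    sudo zypper addlock 'nvidia*'
    sudo zypper dup            # the rest of the Tumbleweed update goes ahead without it

    # Once the new kernel is installed and booted, release the lock and update the driver
    sudo zypper removelock 'nvidia*'
    sudo zypper dup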

Volguus
Mar 3, 2009

Kevin Bacon posted:

oh yeah thats an important detail. its my laptop so when im home its honestly pretty much a browser/youtube machine, and when im traveling with it i tend to want to play some games on it too. so nvidia hybrid graphics compatibility and general day-to-day stability (im not against troubleshooting and tinkering but there comes a point where i just want to watch youtube videos on the couch instead of trying to figure out why pipewire is broken or why my wm is suddenly crashing - which i guess makes arch not exactly ideal, but does snapper/timeshift mitigate this in any meaningful way?) is something i value. but i also want that perpetual new nvidia driver that is supposed to make everything work good now rather than later, which probably puts me into having my cake and eating it too territory

My own personal opinion: Fedora has managed to strike the perfect balance between stability and having new enough packages. However, unlike Ubuntu or Debian or whoever, they do not ship nvidia drivers out of the box (nor media codecs, the good ones). That's easily fixed by adding RPM Fusion and installing the appropriate packages. After trying Arch for a few months some years ago, my impression is that nobody there tests anything, it's just pushed to live. Tumbleweed actually seemed better than Arch on the stability side, but still... it was rough at times (though I last tried Tumbleweed 3-4 years ago, so it may have improved since).
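
For reference, the RPM Fusion route is roughly this. A sketch: the release-package URLs, akmod-nvidia, and the ffmpeg swap are the names from RPM Fusion's own docs, but double-check against them:

code:
    # Enable the free and nonfree RPM Fusion repos for the running Fedora release
    sudo dnf install \
      https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
      https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

    # Proprietary NVIDIA driver, built as an akmod so it follows kernel updates
    sudo dnf install akmod-nvidia

    # The "good" codecs: swap the stripped-down ffmpeg for the full build
    sudo dnf swap ffmpeg-free ffmpeg --allowerasing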

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I love Fedora. It's been my daily driver for nearly a year now. Very stable, reliable and up to date.

I do wish I didn't have to restart to update (that's with the GNOME Software tool; you can do it live via the command line), but eh, it's just gotten me to switch more stuff over to flatpak so it can do live patching.
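
For the record, the live route is just the usual dnf invocation, and flatpaks update on their own; a quick sketch:

code:
    # Live update of the host from a terminal (GNOME Software stages the same thing
    # as an offline update and applies it on reboot)
    sudo dnf upgrade --refresh

    # Flatpak apps and runtimes update independently of the host, no reboot needed
    flatpak update -y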

Volguus
Mar 3, 2009

Nitrousoxide posted:

I love Fedora. It's been my daily driver for nearly a year now. Very stable, reliable and up to date.

I do wish I didn't have to restart to update (that's with the GNOME Software tool; you can do it live via the command line), but eh, it's just gotten me to switch more stuff over to flatpak so it can do live patching.

I personally do everything in my power to not use flatpak or snap. If I have to (for a particular package), I have to, but if I don't, hell no.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Volguus posted:

I personally do everything in my power to not use flatpak or snap. If I have to (for a particular package), I have to, but if I don't, hell no.

Why avoid flatpak? I also avoid snap (don't even have it installed).

Volguus
Mar 3, 2009

Nitrousoxide posted:

Why avoid flatpak? I also avoid snap (don't even have it installed).

A native package that uses the installed libraries on my distro is preferred to one that runs in... whatever flatpak runs in. But if I have no way of running said package natively (a proprietary application that only has a flatpak, for example), then sure. But it's the last resort, and I have to really want to run that application.

BlankSystemDaemon
Mar 13, 2009



That's a damned impressive bit of fuckery.

H110Hawk
Dec 28, 2006

Twerk from Home posted:

How much can I trust the OS to do a good job scheduling tasks that use fewer threads than a single NUMA node has on a single NUMA node?

I have a whole host of tools that recommend being run on a single NUMA node, and a varied topology of servers to run them on: https://github.com/bwa-mem2/bwa-mem2

I'm hoping that if I configure it to run with a number of threads less than the smallest NUMA node in the cluster, the OS will do a good job of running this in a sane way. Is this a good bet, or should I expect to use numactl or libnuma in order to badger this thing into sticking to one NUMA node?

To be more specific: We are on a pretty recent kernel, all of the machines are Ubuntu 20.04, and if NUMA support is something that's improving rapidly we could hop to the hardware enablement kernel.

I don't know if you got an answer here, but you cannot. You should wrap it in numactl etc. to pin it to a single node. Also make sure to benchmark with and without it on a typically loaded machine.
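
A minimal sketch of the wrapper, assuming node 0 is the target and a ref.fa/reads.fq layout like the bwa-mem2 README's (those file names are placeholders):

code:
    # Pin both CPU placement and memory allocation to NUMA node 0
    numactl --cpunodebind=0 --membind=0 \
        bwa-mem2 mem -t 16 ref.fa reads.fq > out.sam

    # Sanity checks: show the node layout, then watch per-node memory use while it runs
    numactl --hardware
    numastat -p bwa-mem2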

Penisaurus Sex
Feb 3, 2009

asdfghjklpoiuyt

Kevin Bacon posted:

dont know what the etiquette around distro recommendations are in this thread, but....


im maybe not extremely up to date, but my impression is that arch based distros tend to break too often, debian is too outdated, ubuntu is too weird/absolutely proprietary, then theres mint/popos/zorin which i have a weird aversion towards that i cant explain

ive used opensuse tw before which i really like and is my first choice. i wanted to use fedora, but i have to manually patch the kernel to enable ec_sys functionality for proper fan control on my laptop which is not ideal. opensuse was stable which i like, but it did start to break probably mainly due to laptop nvidia hybrid graphics. easily fixed with snapper, but then im thinking if im gonna have to rely on snapper for stability, is there any reason then not to just go for an arch based distro outside of aur repo security concerns?

I am just me, but I've been using Solus as my primary desktop OS for about 3 years and have had very few issues. May be worth spinning up a VM and trying it out?

The biggest issue is that the breadth of official package support isn't where Ubuntu/Fedora/AUR (especially) are at, but that might not affect you depending on what you need.

BlankSystemDaemon
Mar 13, 2009



H110Hawk posted:

I don't know if you got an answer here, but you cannot. You should wrap it in numactl etc. to pin it to a single node. Also make sure to benchmark with and without it on a typically loaded machine.

I'm sure I'm reading this wrong, but are you saying that CFS is not NUMA aware, and that it will move running processes across NUMA boundaries?

Sir Bobert Fishbone
Jan 16, 2006

Beebort
I could swear I had this working at one point a couple years back, but is there any way to get an RDP server working on Linux that allows me to connect to an existing but maybe locked session, the same as I'd be able to do in Windows? xrdp seems to work, but only if the local session is completely logged out, otherwise you just get a black screen. GNOME's built-in rdp implementation looks like it might work on an existing session but only if it's explicitly not already locked. I have memories of having this work the way I want, probably on some version of Debian, a few years ago, but either I'm misremembering or this has become impossible.

Volguus
Mar 3, 2009

Sir Bobert Fishbone posted:

I could swear I had this working at one point a couple years back, but is there any way to get an RDP server working on Linux that allows me to connect to an existing but maybe locked session, the same as I'd be able to do in Windows? xrdp seems to work, but only if the local session is completely logged out, otherwise you just get a black screen. GNOME's built-in rdp implementation looks like it might work on an existing session but only if it's explicitly not already locked. I have memories of having this work the way I want, probably on some version of Debian, a few years ago, but either I'm misremembering or this has become impossible.

I'm not aware of any RDP server that does that (there may be one, I just don't know of any), but for VNC there is x11vnc, which does not start a new session when you connect; it just displays, and lets you interact with, the existing X11 display. It should let you unlock a locked session as well.
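
A rough sketch of how that's usually invoked (X11 only; the :0 display, the password setup, and the hostname are placeholders):

code:
    # Attach to the existing X session instead of spawning a new one
    # (-auth guess asks x11vnc to find the display manager's auth cookie; needs root for that)
    sudo x11vnc -display :0 -auth guess -usepw -forever

    # Then tunnel port 5900 over ssh rather than exposing it directly
    ssh -L 5900:localhost:5900 user@thatmachine
    # ...and point a VNC client at localhost:5900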

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Didn't get any nibbles in the web dev thread so I thought I'd try here -

I'm having some trouble with using a different auth_basic_user_file for separate location blocks: as soon as I log in to something.com/location2, it starts using that authentication for something.com/location1.

Is something like that possible or do I need to use different subdomains?

Here's what I've got:

code:
    location /location1 {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd1;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Host $http_host;

        proxy_pass http://backend1.docker-network;
    }

    location /location2 {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd2;
        
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Host $http_host;
        proxy_max_temp_file_size 0;

        proxy_pass http://backend2.docker-network:1234/app;
        proxy_redirect http:// https://;
    }

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

fletcher posted:

Didn't get any nibbles in the web dev thread so I thought I'd try here -

I'm having some trouble with using a different auth_basic_user_file for separate location blocks: as soon as I log in to something.com/location2, it starts using that authentication for something.com/location1.

Is something like that possible or do I need to use different subdomains?

Here's what I've got:

code:
    location /location1 {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd1;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Host $http_host;

        proxy_pass http://backend1.docker-network;
    }

    location /location2 {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd2;
        
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Host $http_host;
        proxy_max_temp_file_size 0;

        proxy_pass http://backend2.docker-network:1234/app;
        proxy_redirect http:// https://;
    }

If I understand you correctly that sounds like a browser issue rather than an nginx one, no? Multiple paths on the same domain requiring different basic auth credentials might not be a supported use case.

spiritual bypass
Feb 19, 2008

Grimey Drawer
Right, the browser matches auth locations by domain. It doesn't know about the per-path setup you have on the server, and there isn't a header that can communicate that afaik

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
If my system takes minutes to shut down and says "A stop job is running for USER MANAGER for UID 1000 (2min / 2min)", how do I find out which process it is?

In this specific case I already do know that it is a crashed wine instance, because it only happens after I use a specific wine program. But are there other ways to find the problem job?
Similarly, is there a way to read the shutdown messages in a log instead of taking photographs of the screen?

RFC2324
Jun 7, 2012

http 418

VictualSquid posted:

If my system takes minutes to shut down and says "A stop job is running for USER MANAGER for UID 1000 (2min / 2min)", how do I find out which process it is?

In this specific case I already do know that it is a crashed wine instance, because it only happens after I use a specific wine program. But are there other ways to find the problem job?
Similarly, is there a way to read the shutdown messages in a log instead of taking photographs of the screen?

You can read old log messages in journalctl, as long as it's set to store logs long-term (this is the default behavior). I think you can just use -b -1 to go back to the previous boot, or just give it explicit time ranges.

E: late caveat to this - depending on how it crashed, it might not be able to flush the logs to disk, so you still want to see what's on the console if you can
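
Concretely, something like this (the timestamps are placeholders):

code:
    # List the boots journald knows about, then read the log from the previous one
    journalctl --list-boots
    journalctl -b -1

    # Or narrow it down to the shutdown window with an explicit range
    journalctl --since "2022-03-03 18:00" --until "2022-03-03 18:10"

    # -b -1 only works if the journal is persistent: Storage=persistent in
    # /etc/systemd/journald.conf, or simply an existing /var/log/journal directory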

RFC2324 fucked around with this message at 19:29 on Mar 3, 2022

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Keito posted:

If I understand you correctly that sounds like a browser issue rather than an nginx one, no? Multiple paths on the same domain requiring different basic auth credentials might not be a supported use case.


cum jabbar posted:

Right, the browser matches auth locations by domain. It doesn't know about the per-path setup you have on the server, and there isn't a header that can communicate that afaik

Ahh dang, that makes sense. Thanks for confirming. I'll have to switch to using different subdomains then.
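
For the archives, the subdomain version ends up as two server blocks, each with its own htpasswd. A rough sketch with placeholder server names, keeping the upstreams from above (TLS and the other proxy headers left out for brevity):

code:
    server {
        listen 80;
        server_name app1.something.com;

        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd1;

        location / {
            proxy_set_header Host $http_host;
            proxy_pass http://backend1.docker-network;
        }
    }

    server {
        listen 80;
        server_name app2.something.com;

        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd2;

        location / {
            proxy_set_header Host $http_host;
            proxy_max_temp_file_size 0;
            proxy_pass http://backend2.docker-network:1234/app;
        }
    }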

Methanar
Sep 26, 2013

by the sex ghost
I've spent the last 2 days reading and watching videos about the internal implementation details of kernel CPU scheduling, context switching, cache locality, interrupt handling, conntrack, and generally the whole kernel network stack with wacko eBPF.

All with the goal of understanding why I'm seeing garbage performance on so many things. Something is wrong, but I don't know what.
idk what I'm doing. Even if I do understand (whatever that means) this stuff, I'm not sure how it could possibly be actionable to me. I'm not about to go rewrite the kernel to fix anything. The best case is that I just keep throwing money at the problem or change things up entirely without understanding why. Really the same as I was doing before.

The only thing I'm sure about any more is my own ignorance.

Methanar fucked around with this message at 02:16 on Mar 7, 2022

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Methanar posted:

I've spent the last 2 days reading and watching videos about the internal implementation details of kernel CPU scheduling, context switching, cache locality, interrupt handling, conntrack, and generally the whole kernel network stack with wacko eBPF.

All with the goal of understanding why I'm seeing garbage performance on so many things. Something is wrong, but I don't know what.
idk what I'm doing. Even if I do understand (whatever that means) this stuff, I'm not sure how it could possibly be actionable to me. I'm not about to go rewrite the kernel to fix anything. The best case is that I just keep throwing money at the problem or change things up entirely without understanding why. Really the same as I was doing before.

The only thing I'm sure about any more is my own ignorance.

https://www.brendangregg.com

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



My computer is dying from a hardware failure. Going to replace it with a new 12th-gen Intel chip. Here's to hoping the 5.16 kernel actually fixed the scheduling issues with the e-cores on that chipset.

Also going to switch to an immutable OS, Silverblue, with this new system, since all of the software I use is now available as a flatpak.

Guess we'll see if this is ready for prime time.

Methanar
Sep 26, 2013

by the sex ghost

A flame graph taken on Friday while responding to an incident, telling me that ksoftirq is causing me problems, was the prompt for a lot of this.

I'll spare you the 5000 word explanation of why things suck.
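
For anyone who wants to generate that kind of graph themselves, the usual recipe is perf plus Brendan Gregg's FlameGraph scripts; a sketch, assuming the scripts are cloned to ~/FlameGraph:

code:
    # Sample all CPUs at 99 Hz for 30 seconds, capturing kernel and user stacks
    sudo perf record -F 99 -a -g -- sleep 30

    # Fold the stacks and render the interactive SVG
    # (assumes https://github.com/brendangregg/FlameGraph is cloned to ~/FlameGraph)
    sudo perf script | ~/FlameGraph/stackcollapse-perf.pl | ~/FlameGraph/flamegraph.pl > flame.svg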

JLaw
Feb 10, 2008

- harmless -
Hmm. So my old 1440p/60Hz Catleap monitor finally went on the fritz, and I got a Gigabyte M27Q 1440p/144Hz monitor to replace it. Partly because of some glowing reviews about how well it works w/ Linux.

However, the display settings on my system (elementary OS 5, based on Ubuntu 18, using NVidia proprietary drivers) don't show any available resolution above 2048x1152. Also no refresh rate above 60Hz.

Is someone around here a Linux monitor resolutions ninja, by any chance?

Dumping out the EDID data for the monitor, 2560x1440 is certainly in there. There's some distinctions between "established timings" and "standard timings" and "detailed mode" and things in the "extension block", none of which I really understand.

I've tried the xrandr-based workflow for adding a custom mode that shows up in a lot of places around the web, both by using cvt to generate a modeline and also by reverse-engineering a modeline from the EDID data. But addmode fails with "X Error of failed request: BadMatch (invalid parameter attributes)". I probably need to step back and more generally understand what's going on here if that's at all possible.
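
For reference, the workflow I'm talking about is roughly this (DVI-I-1 is a placeholder for whatever connector name xrandr actually lists):

code:
    # Generate a CVT modeline for 2560x1440 at 60 Hz
    cvt 2560 1440 60
    #  -> Modeline "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync

    # Register the mode and attach it to the connector xrandr reports for this monitor
    xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
    xrandr --addmode DVI-I-1 2560x1440_60.00
    xrandr --output DVI-I-1 --mode 2560x1440_60.00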

==========

e: Oh wow, /var/log/Xorg.0.log says "225.0 MHz maximum pixel clock" for this monitor. That ain't right.

Apparently the NVidia proprietary driver has that cap for HDMI output. I was previously using the HDMI output on my card for this monitor, but currently I'm using DVI that then goes to an HDMI adapter, and xorg reports this as a DVI monitor... I wonder if NVidia could still be detecting that I'm going to an HDMI monitor port. Bleargh.

I guess this almost/kinda explains why the previous monitor worked, as it was DVI.

Tried using NoMaxPClkCheck in my xorg conf and that just made the monitor refuse to display anything.

e2: Monitor would also take DisplayPort input, but good luck finding a DVI->DP cable/adapter that supports 1440p. :-/

e3: Switching to the nouveau drivers also makes the monitor unresponsive, even though the Xorg log says it's choosing 1920x1080, which it was previously perfectly happy at. I suspect that in this case (and maybe the NoMaxPClkCheck case as well) it was trying to run at 144Hz and that doesn't work for some reason.

Gonna call it a day I think. Linux! *jazz hands*

JLaw fucked around with this message at 00:38 on Mar 8, 2022

CaptainSarcastic
Jul 6, 2013



JLaw posted:

Hmm. So my old 1440p/60Hz Catleap monitor finally went on the fritz, and I got a Gigabyte M27Q 1440p/144Hz monitor to replace it. Partly because of some glowing reviews about how well it works w/ Linux.

However, the display settings on my system (elementary OS 5, based on Ubuntu 18, using NVidia proprietary drivers) don't show any available resolution above 2048x1152. Also no refresh rate above 60Hz.

Is someone around here a Linux monitor resolutions ninja, by any chance?

Dumping out the EDID data for the monitor, 2560x1440 is certainly in there. There's some distinctions between "established timings" and "standard timings" and "detailed mode" and things in the "extension block", none of which I really understand.

I've tried the xrandr-based workflow for adding a custom mode that shows up in a lot of places around the web, both by using cvt to generate a modeline and also by reverse-engineering a modeline from the EDID data. But addmode fails with "X Error of failed request: BadMatch (invalid parameter attributes)". I probably need to step back and more generally understand what's going on here if that's at all possible.

==========

e: Oh wow, /var/log/Xorg.0.log says "225.0 MHz maximum pixel clock" for this monitor. That ain't right.

Apparently the NVidia proprietary driver has that cap for HDMI output. I was previously using the HDMI output on my card for this monitor, but currently I'm using DVI that then goes to an HDMI adapter, and xorg reports this as a DVI monitor... I wonder if NVidia could still be detecting that I'm going to an HDMI monitor port. Bleargh.

I guess this almost/kinda explains why the previous monitor worked, as it was DVI.

Tried using NoMaxPClkCheck in my xorg conf and that just made the monitor refuse to display anything.

e2: Monitor would also take DisplayPort input, but good luck finding a DVI->DP cable/adapter that supports 1440p. :-/

What GPU do you have?

JLaw
Feb 10, 2008

- harmless -
Don't laugh, but it's a GeForce GTX 560 Ti.

This is one of those old systems where I'm afraid that upgrading things will lead to a domino effect of upgrading everything (as I may be finding out right now).

CaptainSarcastic
Jul 6, 2013



JLaw posted:

Don't laugh, but it's a GeForce GTX 560 Ti.

This is one of those old systems where I'm afraid that upgrading things will lead to a domino effect of upgrading everything (as I may be finding out right now).

From what I can gather you should be able to get the right resolution, although the FPS might be low.

This page describes some troubleshooting under similar circumstances:

https://www.reddit.com/r/techsupport/comments/2v15oo/gtx_560ti_not_showing_max_resolution_over_hdmi/

ExcessBLarg!
Sep 1, 2001

JLaw posted:

Don't laugh, but it's a GeForce GTX 560 Ti.
My guess is that it's a link bandwidth issue. I don't know if the 560 is DVI dual-link, but using an HDMI adapter with it is going to constrain it to single-link bandwidth. You might have better luck with the mini-HDMI port if it supports HDMI 1.3 bandwidth.

JLaw
Feb 10, 2008

- harmless -
The reddit link above is pretty Windows-oriented... I think the equivalent shenanigans with the Linux NVidia settings panel would involve setting ViewPortOut to the desired resolution which doesn't seem to be allowed here.

ExcessBLarg! posted:

My guess is that it's a link bandwidth issue. I don't know if the 560 is DVI dual-link, but using an HDMI adapter with it is going to constrain it to single-link bandwidth. You might have better luck with the mini-HDMI port if it supports HDMI 1.3 bandwidth.

Yeah, according to the card specs the HDMI port should be able to handle it. The adapter had reviews that claimed 1440p/60Hz should be OK, and I thought the specs did as well, but now I'm not sure & I see some other reviews complaining. The Xorg log still shows that the driver or someone is capping the pixel clock too low both when using the HDMI port (w/ HDMI 1.4 cable) and when using the DVI port going to that HDMI adapter -- I suppose there could be hilarious hijinks here with the pixel clock being capped for different reasons in those different cases.

In any case, setting NoMaxPClkCheck with the NVidia driver fails to have good results in both setups. Hmm. More verbose logging shows that it's choosing 1440p/60Hz there, no reason that shouldn't work. I suppose I should switch back to nouveau and try more things there but bleeaarrrgh.

Will leave this thread alone for a while now. :-)

ExcessBLarg!
Sep 1, 2001
A cursory glance of Amazon shows a bunch of DVI "dual-link" to HDMI adapters but I'm suspicious of all of them. DVI dual-link uses two transmitters which you can't just convert into a single transmitter at higher bandwidth--at least not with a passive or even a simple active adapter.

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS

ExcessBLarg! posted:

A cursory glance of Amazon shows a bunch of DVI "dual-link" to HDMI adapters but I'm suspicious of all of them. DVI dual-link uses two transmitters which you can't just convert into a single transmitter at higher bandwidth--at least not with a passive or even a simple active adapter.
Seconding your distrust - I don't think these work. My poor old dual-link dvi monitor sits unused because there is really no good way to use it with modern GPUs.

JLaw
Feb 10, 2008

- harmless -
Hah, victory!

OK for the record (and thanks to everyone for thought-provoking comments):

* Nouveau drivers. Possibly I wasn't even properly purging the NVidia drivers before, but also just switching to the nouveau drivers didn't do the trick alone.

* HDMI 1.4 cable.

* No extra xorg conf options.

* Setting the nouveau.hdmimhz kernel parameter to something, in grub (see the sketch below). I picked 250 for now just to nudge it above the pixel clock value required for 1440p/60Hz.

Whew.
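
For the record, the kernel-parameter bit is just the grub defaults file plus a config regen. A sketch for Ubuntu-ish systems like elementary; "quiet splash" stands in for whatever is already on that line:

code:
    # /etc/default/grub: append the parameter to the default kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.hdmimhz=250"

    # Then regenerate the grub config (Ubuntu/elementary spelling)
    sudo update-grub
    # (equivalent to: sudo grub-mkconfig -o /boot/grub/grub.cfg)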

e: ...annnnd my display performance is apparently garbage now, even with just desktop stuff.

JLaw fucked around with this message at 19:13 on Mar 8, 2022

BlankSystemDaemon
Mar 13, 2009



Hey Linux folks, you want to update right the gently caress now. Or better yet, yesterday. :ohdear:

ExcessBLarg! posted:

A cursory glance of Amazon shows a bunch of DVI "dual-link" to HDMI adapters but I'm suspicious of all of them. DVI dual-link uses two transmitters which you can't just convert into a single transmitter at higher bandwidth--at least not with a passive or even a simple active adapter.
Yeah there's no way this works, even theoretically.

With companies lying about what their cables do both on the low end (with examples like this) and at the high end (with examples like Monster Cables etc.), it sometimes seems impossible to get the right thing without paying a premium for someone to validate them.

Computer viking
May 30, 2011
Now with less breakage.

I think Linus Tech Tips invested in a cable tester a while back and ran through a lot of HDMI and DisplayPort cables, with widely varying results. The only real positive was that all the DisplayPort cables they bought (as opposed to getting them bundled) matched or surpassed the specs they claimed to support.

I'd kind of love to see them add assorted adapters in the mix, if there's a way to make that work.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Is it possible for a userspace application to attach to a kernel module in a way that the kernel module can call back into the userspace application? We want to do a diagnostic application and send some information out for recording when there are specific issues, and I'd prefer to do it without polling.

BlankSystemDaemon
Mar 13, 2009



Isn't that what every form of tracing does?

It's certainly how dtrace works, because that's how OpenBSM and auditd are capable of doing full-system introspection.

Hed
Mar 31, 2004

Fun Shoe

Rocko Bonaparte posted:

Is it possible for a userspace application to attach to a kernel module in a way that the kernel module can call back into the userspace application? We want to do a diagnostic application and send some information out for recording when there are specific issues, and I'd prefer to do it without polling.

This sounds like it might be a job for eBPF? Check this paper, specifically section 4.2
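
To give a flavour of what that looks like from userspace, a bpftrace one-liner can already stream events out of the kernel with no polling. A sketch; my_module_interesting_fn is a made-up symbol standing in for whatever function your module exposes:

code:
    # Fire every time the (hypothetical) module function runs; bpftrace delivers each
    # event to this userspace process over a ring buffer, so there's no polling loop.
    sudo bpftrace -e '
    kprobe:my_module_interesting_fn
    {
        printf("%s (pid %d) hit my_module_interesting_fn, arg0=0x%llx\n", comm, pid, arg0);
    }'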

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I feel like I could bend that stuff to my will, but I was aiming for something like a logging system, just without a huge volume of messages. I want to collect some data from our lab machines running all kinds of modules and stuff. A particular example would be a message to note that something we worked on is actually being run.

I'm thinking the existing tracing systems could be used, insofar as I could have the userspace thing latch on to a certain class of trace events to pass along the messages, but I wouldn't know if these trace solutions allow for messages that arbitrary.

some kinda jackal
Feb 25, 2003

 
 
I have a really weird problem I'm trying to solve with, hopefully, built-in tools...

Is there a way to take a folder and create a tar file of said folder, but have that tar file be limited to a specific filesize so it creates subsequent tar files if necessary, BUT (and this is the part I can't figure out) have each tar file itself be a fully standalone package, that is to say one that doesn't truncate or split files across tar files?

So I can do something like

tar --tape-length=90000 -M -c --file=mytar{1..99}.tar SOURCEFOLDER

and it'll create mytar1.tar, mytar2.tar, mytar3.tar, but the binaries that get caught at the tape limit are basically truncated and continued in the next one.

My goal here is to be able to take each tar file as its own artifact independent of the others, so this kind of relies on tar saying "oh yeah ok this next file won't fit in my size limit so it'll go in the next tar". In my ideal scenario, I could take mytar1.tar and extract it without mytar2 or 3, or do the same with 2 without 1 and 3 and the only consequence would be missing files, but not missing in the sense that half of it exists in this tar and it failed to materialize because the other half isn't available. Essentially all files would live 100% in one tar or another, not on the boundary.

I'm articulating this beyond terribly but hopefully someone gets WHAT I'm trying to accomplish, if not why. The reason is really esoteric and specific to some tooling I have to conform to.

If the --tape-length or the -M(ultiarchive) operator has some other flag that I'm missing which would render each individual tar non-reliant on the others that would be the best case scenario but I'm going through the docs and I'm not sure that exists.


e: I'm also not married to the tar command itself, just that the output HAS to be a tar file or series of tars, so if there's something that'll do the job as a 3rd party tool I'm way OK with that too.


e2: OK I think this might do what I need: https://github.com/dmuth/tarsplit

some kinda jackal fucked around with this message at 18:24 on Mar 10, 2022


xzzy
Mar 5, 2009

I don't think you're going to be doing that with a tar one-liner. Multiarchive mode in particular explicitly states that it requires all the tar files to be available to get anything out of the archive. Absent a tool to do exactly what you're asking, I think a script that walks the directory tree and builds a file list conforming to your max size might work. You feed that as an argument to tar and away you go.
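
Something like this rough first-fit sketch is what I have in mind (untested; assumes bash plus GNU find/tar, the mytarN.tar names and the byte limit are placeholders, and a single file bigger than the limit just ends up in an oversized tar of its own):

code:
    #!/bin/bash
    # Greedy first-fit packing: whole files into numbered tars, never splitting a file.
    # Usage: ./binpack.sh SOURCEFOLDER 90000000   (limit is in bytes)
    set -euo pipefail
    src=$1 limit=$2
    n=1 size=0
    list=$(mktemp)

    flush() {                              # tar up the current file list, if it has anything in it
        [ -s "$list" ] || return 0
        tar -cf "mytar$n.tar" -T "$list"
        n=$((n + 1)); size=0; : > "$list"
    }

    while IFS=$'\t' read -r bytes path; do
        [ $((size + bytes)) -gt "$limit" ] && flush
        printf '%s\n' "$path" >> "$list"
        size=$((size + bytes))
    done < <(find "$src" -type f -printf '%s\t%p\n' | sort -n)

    flush                                  # don't forget the last partial archive
    rm -f "$list"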

However, that depends on how busy the tree you're archiving is... writing your own backup tool means you need to take into account what happens if a file changes, and that quickly puts you into "why did you do this to yourself" territory.

I did some Google searches and found this project that seems to accomplish what you're asking:

https://github.com/dmuth/tarsplit

So maybe that's a place to start. Still comes with the "what if the filesystem changes" problem though.

edit - oops that's what I get for taking 30 minutes to type a post.
