|
Rojo_Sombrero posted:I don't understand the aversion to Mint. It's been a good daily driver for my HP laptop so far. I can do some light gaming, i.e. MTGA and WoT, without issue. It works rather well out of the box. it is entirely irrational and probably purely based on my tendency to overcomplicate things. i might very well end up on Mint or some other Debian-based distro after banging my head against an Arch-based distro or SUSE for a while
|
# ? Feb 25, 2022 21:49 |
|
|
Kevin Bacon posted:dont know what the etiquette around distro recommendations are in this thread, but.... In my experience running Tumbleweed the issues with Nvidia drivers mostly happened when they got out of step with kernel updates, so would put them off (if I remembered) until I saw a kernel update go through or otherwise verified they didn't cause problems. Because of that and getting tired of the constant updates on Tumbleweed I ended up going back to the standard Leap spin of OpenSUSE, and haven't had any issues with Nvidia since. Doing version updates has been smoother and smoother over the years in my experience, and since I keep /home as a separate partition even a reinstall is pretty painless nowadays.
|
# ? Feb 25, 2022 22:22 |
|
Kevin Bacon posted:oh yeah thats an important detail. its my laptop so when im home its honestly pretty much a browser/youtube machine, and when im traveling with it i tend to want to play some games on it too. so nvidia hybrid graphics compatibility, general day-to-day stability (im not against troubleshooting and tinkering but there comes a point where i just want to watch youtube videos on the couch instead of trying to figure out why pipewire is broken or why my wm is suddenly crashing - which i guess makes arch not exactly ideal, but does snapper/timeshift mitigate this in any meaningful way?) is something i value. but i also want that perpetual new nvidia driver that is supposed to make everything work good now rather than later, which probably puts me into having a cake and eating it too territory My own personal opinion: Fedora has managed to strike the perfect balance between stability and having new enough packages. However, unlike Ubuntu or Debian or whoever, they do not come with nvidia drivers out of the box (nor media codecs, the good ones). That's easily fixed by adding RPMFusion and installing the appropriate packages. After trying Arch for a few months some years ago, I don't think anyone there tests anything, it's just pushed to live. Tumbleweed actually seemed better than Arch on the stability side, but still .. it was rough at times (though I last tried Tumbleweed 3-4 years ago, may have improved since).
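For reference, enabling RPMFusion and pulling in the driver and codecs is only a few commands (these follow RPM Fusion's own setup instructions; exact package names can drift between Fedora releases):

```shell
# Enable RPM Fusion free + nonfree repos for the installed Fedora release
sudo dnf install \
  https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# Nvidia driver as an akmod (rebuilds the kernel module on kernel updates)
sudo dnf install akmod-nvidia

# The "good" multimedia codecs
sudo dnf groupupdate multimedia --setopt="install_weak_deps=False" \
  --exclude=PackageKit-gstreamer-plugin
```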
|
# ? Feb 26, 2022 01:29 |
I love Fedora. It's been my daily driver for nearly a year now. Very stable, reliable and up to date. I do wish I didn't have to restart to update when using the GNOME Software tool (you can do it live via the command line), but eh, it's just gotten me to switch more stuff over to flatpak so it can do live patching.
|
|
# ? Feb 26, 2022 01:34 |
|
Nitrousoxide posted:I love Fedora. It's been my daily driver for near a year now. Very stable, reliable and up to date. I personally do everything in my power to not use flatpak or snap. If I have to (for a particular package), I have to, but if I don't, hell no.
|
# ? Feb 26, 2022 01:42 |
Volguus posted:I personally do everything in my power to not use flatpak or snap. If I have to (for a particular package), I have to, but if I don't, hell no. Why avoid flatpak? I also avoid snap (don't even have it installed).
|
|
# ? Feb 26, 2022 01:44 |
|
Nitrousoxide posted:Why avoid flatpak? I also avoid snap (don't even have it installed). A native package that uses the installed libraries on my distro is preferred to one that runs in ... whatever flatpak runs in. But, if I have no way of running said package natively (a proprietary application that only has a flatpak, for example), then sure. But it is the last resort and I have to really want to run that application.
|
# ? Feb 26, 2022 01:49 |
That's a damned impressive bit of fuckery.
|
|
# ? Feb 26, 2022 15:12 |
|
Twerk from Home posted:How much can I trust the OS to do a good job scheduling tasks that use fewer threads than a single NUMA node has on a single NUMA node? I don't know if you got an answer here but you cannot. You should wrap it in numactl etc. to pin it to a single node. Also make sure to benchmark with and without it on a typically loaded machine.
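A minimal sketch of that (the binary name and pid are just placeholders):

```shell
# Show the NUMA topology first: node count, CPUs per node, free memory
numactl --hardware

# Run the workload with both CPU scheduling and memory allocation
# pinned to node 0
numactl --cpunodebind=0 --membind=0 ./my_workload

# Check which nodes an already-running process is actually allocating on
numastat -p 1234
```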
|
# ? Feb 28, 2022 20:24 |
|
Kevin Bacon posted:dont know what the etiquette around distro recommendations are in this thread, but.... I am just me but I've been using Solus as my primary desktop OS for about 3 years and have had very few issues, may be worth spinning up a VM and trying it out? Biggest issue is that the breadth of official package support isn't where Ubuntu/Fedora/AUR especially are at, but that might not affect you depending on what you need.
|
# ? Feb 28, 2022 21:23 |
H110Hawk posted:I don't know if you got an answer here but you cannot. You should wrap it in numactl etc to pin it to a single node. Also make sure to benchmark with and without it in a typically loaded machine.
|
|
# ? Mar 1, 2022 13:41 |
|
I could swear I had this working at one point a couple years back, but is there any way to get an RDP server working on Linux that allows me to connect to an existing but maybe locked session, the same as I'd be able to do in Windows? xrdp seems to work, but only if the local session is completely logged out, otherwise you just get a black screen. GNOME's built-in rdp implementation looks like it might work on an existing session but only if it's explicitly not already locked. I have memories of having this work the way I want, probably on some version of Debian, a few years ago, but either I'm misremembering or this has become impossible.
|
# ? Mar 2, 2022 04:05 |
|
Sir Bobert Fishbone posted:I could swear I had this working at one point a couple years back, but is there any way to get an RDP server working on Linux that allows me to connect to an existing but maybe locked session, the same as I'd be able to do in Windows? xrdp seems to work, but only if the local session is completely logged out, otherwise you just get a black screen. GNOME's built-in rdp implementation looks like it might work on an existing session but only if it's explicitly not already locked. I have memories of having this work the way I want, probably on some version of Debian, a few years ago, but either I'm misremembering or this has become impossible. I'm not aware of any RDP server (there may be, I just don't know any), but for VNC there is x11vnc which does not start a new session when you connect to it, but just displays and allows one to interact with the existing X11 display. It should allow one to unlock a locked session as well.
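A typical invocation for that, assuming the session runs on display :0:

```shell
# Mirror the existing X session rather than starting a new one.
# -auth guess lets x11vnc locate the session's Xauthority file;
# -usepw prompts for/uses a VNC password so the display isn't wide open;
# -forever keeps serving after the first client disconnects.
x11vnc -display :0 -auth guess -usepw -forever
```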
|
# ? Mar 2, 2022 04:38 |
Didn't get any nibbles in the web dev thread so I thought I'd try here - I'm having some trouble with using different auth_basic_user_file for separate location blocks, as soon as I log in to something.com/location2 it starts using that authentication for something.com/location1 Is something like that possible or do I need to use different subdomains? Here's what I've got: code:
|
|
# ? Mar 3, 2022 07:49 |
|
fletcher posted:Didn't get any nibbles in the web dev thread so I thought I'd try here - If I understand you correctly that sounds like a browser issue rather than an nginx one, no? Multiple paths on the same domain requiring different basic auth credentials might not be a supported use case.
|
# ? Mar 3, 2022 12:53 |
|
Right, browsers match auth locations by domain. They don't know about the per-path setup you have on the server, and there isn't a header that can communicate that afaik
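For what it's worth, the subdomain split would look something like this (server names and htpasswd paths are placeholders):

```nginx
# One server block per subdomain, each with its own credential file,
# instead of two location blocks on the same host.
server {
    listen 80;
    server_name one.something.com;
    auth_basic           "Area one";
    auth_basic_user_file /etc/nginx/htpasswd-one;
    # ... root / proxy_pass for the first app ...
}
server {
    listen 80;
    server_name two.something.com;
    auth_basic           "Area two";
    auth_basic_user_file /etc/nginx/htpasswd-two;
    # ... root / proxy_pass for the second app ...
}
```

Browsers treat each hostname as a separate protection space, so the two credential sets no longer collide.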
|
# ? Mar 3, 2022 14:13 |
|
If my system takes minutes to shut down and says: "A stop job is running for USER MANAGER for UID 1000 (2min / 2min)", how do I find out which process it is? In this specific case I already know that it is a crashed wine instance because it only happens after I use a specific wine program. But are there other ways to find the problem job? Similarly, is there a way to read the shutdown messages in a log instead of taking photographs of the screen?
|
# ? Mar 3, 2022 16:54 |
|
VictualSquid posted:If my system takes minutes to shut down and says: "A stop job is running for USER MANAGER for UID 1000 (2min / 2min). How do I find out which process it is? You can read old log messages in journalctl, as long as it's set to store logs long term (this is the default behavior). I think you can just use -b -1 to go back to the previous boot, or just give it explicit time ranges. E: late caveat to this - depending on how it crashed it might not be able to flush the logs to disk, so you still want to see what's on the console if you can RFC2324 fucked around with this message at 19:29 on Mar 3, 2022 |
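Concretely (all standard journalctl options):

```shell
# Everything from the previous boot
journalctl -b -1

# Or an explicit time window
journalctl --since "2022-03-03 16:00" --until "2022-03-03 17:30"

# Narrow it to the user manager for UID 1000, where the stop job hung
journalctl -b -1 -u user@1000.service
```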
# ? Mar 3, 2022 17:05 |
Keito posted:If I understand you correctly that sounds like a browser issue rather than an nginx one, no? Multiple paths on the same domain requiring different basic auth credentials might not be a supported use case. cum jabbar posted:Right, the browsers match auth locations by domain. It doesn't know about the per path setup you have on the server and there isn't a header that can communicate that afaik Ahh dang, that makes sense. Thanks for confirming. I'll have to switch to using different subdomains then.
|
|
# ? Mar 3, 2022 19:22 |
|
I've spent the last 2 days reading and watching videos about the internal implementation details of kernel CPU scheduling, context switching, cache locality, interrupt handling, conntrack and generally the whole kernel network stack with wacko eBPF. All with the goal of understanding why I'm seeing garbage performance on so many things. Something is wrong, but I don't know what. idk what I'm doing. Even if I do understand (whatever that means) this stuff, I'm not sure how it could possibly be actionable to me. I'm not about to go rewrite the kernel to fix anything. The best case is that I just keep throwing money at the problem or change things up entirely without understanding why. Really the same as I was doing before. The only thing I'm sure about any more is my own ignorance. Methanar fucked around with this message at 02:16 on Mar 7, 2022 |
# ? Mar 7, 2022 02:02 |
|
Methanar posted:I've spent the last 2 days reading and watching videos about the internal implementation details of kernel CPU scheduling, context switching, cache locality, interrupt handling, conntrack and generally the whole kernel network stack with wacko eBPF. https://www.brendangregg.com
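The canonical CPU flame graph recipe from that site, for the record (assumes you've cloned Brendan Gregg's FlameGraph scripts locally):

```shell
# Sample all CPUs at 99 Hz with stack traces for 30 seconds
perf record -F 99 -a -g -- sleep 30

# Fold the stacks and render an interactive SVG
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > cpu.svg
```

Widest towers in the SVG are where the CPU time is actually going, which is usually a faster path to "why does this suck" than reading scheduler internals.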
|
# ? Mar 7, 2022 02:56 |
My computer is dying from a hardware failure. Going to replace it with a new 12th-gen Intel chip. Here's to hoping the 5.16 kernel actually fixed the scheduling issues with the e-cores on that chipset. Also going to switch to an immutable OS, Silverblue, with this new system since all the software I use is now available as a flatpak. Guess we'll see if this is ready for prime time.
|
|
# ? Mar 7, 2022 02:59 |
|
A flame graph taken on Friday while responding to an incident telling me that ksoftirq is causing me problems was the prompt for a lot of this. I'll spare you the 5000 word explanation of why things suck.
|
# ? Mar 7, 2022 05:05 |
|
Hmm. So my old 1440p/60Hz Catleap monitor finally went on the fritz, and I got a Gigabyte M27Q 1440p/144Hz monitor to replace it. Partly because of some glowing reviews about how well it works w/ Linux. However, the display settings on my system (elementary OS 5, based on Ubuntu 18, using NVidia proprietary drivers) don't show any available resolution above 2048x1152. Also no refresh rate above 60Hz. Is someone around here a Linux monitor resolutions ninja, by any chance? Dumping out the EDID data for the monitor, 2560x1440 is certainly in there. There's some distinctions between "established timings" and "standard timings" and "detailed mode" and things in the "extension block", none of which I really understand. I've tried the xrandr-based workflow for adding a custom mode that shows up in a lot of places around the web, both by using cvt to generate a modeline and also by reverse-engineering a modeline from the EDID data. But addmode fails with "X Error of failed request: BadMatch (invalid parameter attributes)". I probably need to step back and more generally understand what's going on here if that's at all possible. ========== e: Oh wow, /var/log/Xorg.0.log says "225.0 MHz maximum pixel clock" for this monitor. That ain't right. Apparently the NVidia proprietary driver has that cap for HDMI output. I was previously using the HDMI output on my card for this monitor, but currently I'm using DVI that then goes to an HDMI adapter, and xorg reports this as a DVI monitor... I wonder if NVidia could still be detecting that I'm going to an HDMI monitor port. Bleargh. I guess this almost/kinda explains why the previous monitor worked, as it was DVI. Tried using NoMaxPClkCheck in my xorg conf and that just made the monitor refuse to display anything. e2: Monitor would also take DisplayPort input, but good luck finding a DVI->DP cable/adapter that supports 1440p. 
:-/ e3: Switching to nouveau drivers also makes monitor unresponsive, even though Xorg log says it's choosing 1920x1080 which it was previously perfectly happy at. I suspect that in this case (and maybe the NoMaxPClkCheck case as well) it was trying to run at 144Hz and that doesn't work for some reason. Gonna call it a day I think. Linux! *jazz hands* JLaw fucked around with this message at 00:38 on Mar 8, 2022 |
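For anyone following along, the xrandr workflow in question looks like this (the output name DVI-I-1 is a guess; check `xrandr` for yours, and the modeline numbers are cvt's own output):

```shell
# Generate a CVT modeline for 2560x1440 @ 60 Hz
cvt 2560 1440 60

# Register the mode and assign it to the output
xrandr --newmode "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync
xrandr --addmode DVI-I-1 "2560x1440_60.00"
xrandr --output DVI-I-1 --mode "2560x1440_60.00"
```

(the addmode step is where the BadMatch error shows up when the driver's pixel-clock cap kicks in.)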
# ? Mar 7, 2022 23:12 |
|
JLaw posted:Hmm. So my old 1440p/60Hz Catleap monitor finally went on the fritz, and I got a Gigabyte M27Q 1440p/144Hz monitor to replace it. Partly because of some glowing reviews about how well it works w/ Linux. What GPU do you have?
|
# ? Mar 8, 2022 00:36 |
|
Don't laugh, but it's a GeForce GTX 560 Ti. This is one of those old systems where I'm afraid that upgrading things will lead to a domino effect of upgrading everything (as I may be finding out right now).
|
# ? Mar 8, 2022 00:51 |
|
JLaw posted:Don't laugh, but it's a GeForce GTX 560 Ti. From what I can gather you should be able to get the right resolution, although the FPS might be low. This page describes some troubleshooting under similar circumstances: https://www.reddit.com/r/techsupport/comments/2v15oo/gtx_560ti_not_showing_max_resolution_over_hdmi/
|
# ? Mar 8, 2022 01:41 |
|
JLaw posted:Don't laugh, but it's a GeForce GTX 560 Ti.
|
# ? Mar 8, 2022 02:01 |
|
The reddit link above is pretty Windows-oriented... I think the equivalent shenanigans with the Linux NVidia settings panel would involve setting ViewPortOut to the desired resolution, which doesn't seem to be allowed here. ExcessBLarg! posted:My guess is that it's a link bandwidth issue. I don't know if the 560 is DVI dual-link, but using a HDMI adapter with it is going to constrain it to single-link bandwidth. You might have better luck with the mini-HDMI port if it supports HDMI 1.3 bandwidth. Yah, according to the card specs the HDMI port should be able to handle it. The adapter had reviews that claimed 1440p/60Hz should be OK, and I thought the specs did as well, but now I'm not sure & I see some other reviews complaining. The Xorg log still shows that the driver or someone is capping the pixel clock too low both when using the HDMI port (w/ HDMI 1.4 cable), and when using the DVI port going to that HDMI adapter -- I suppose there could be hilarious hijinks here with the pixel clock being capped for different reasons in those different cases. In any case, setting NoMaxPClkCheck with the NVidia driver fails to have good results in both setups. Hmm. More verbose logging shows that it's choosing 1440p/60Hz there, no reason that shouldn't work. I suppose I should switch back to nouveau and try more things there but bleeaarrrgh. Will leave this thread alone for a while now. :-)
|
# ? Mar 8, 2022 03:06 |
|
A cursory glance of Amazon shows a bunch of DVI "dual-link" to HDMI adapters but I'm suspicious of all of them. DVI dual-link uses two transmitters which you can't just convert into a single transmitter at higher bandwidth--at least not with a passive or even a simple active adapter.
|
# ? Mar 8, 2022 03:30 |
|
ExcessBLarg! posted:A cursory glance of Amazon shows a bunch of DVI "dual-link" to HDMI adapters but I'm suspicious of all of them. DVI dual-link uses two transmitters which you can't just convert into a single transmitter at higher bandwidth--at least not with a passive or even a simple active adapter.
|
# ? Mar 8, 2022 03:36 |
|
Hah, victory! OK for the record (and thanks to everyone for thought-provoking comments): * Nouveau drivers. Possibly I wasn't even properly purging the NVidia drivers before, but also just switching to the nouveau drivers didn't do the trick alone. * HDMI 1.4 cable. * No extra xorg conf options. * Setting the nouveau.hdmimhz kernel parameter to something, in grub. I picked 250 for now just to nudge it above the pixel clock value required for 1440p/60Hz. Whew. e: ...annnnd my display performance is apparently garbage now, even with just desktop stuff. JLaw fucked around with this message at 19:13 on Mar 8, 2022 |
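For anyone repeating this, the kernel-parameter change goes roughly like so (Debian/Ubuntu-family commands, since elementary is Ubuntu-based):

```shell
# /etc/default/grub -- append the parameter to the default command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.hdmimhz=250"

# Then regenerate the grub config and reboot
sudo update-grub

# (on Fedora/openSUSE it would be: sudo grub2-mkconfig -o /boot/grub2/grub.cfg)
```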
# ? Mar 8, 2022 06:42 |
Hey Linux folks, you want to update right the gently caress now. Or better yet, yesterday. ExcessBLarg! posted:A cursory glance of Amazon shows a bunch of DVI "dual-link" to HDMI adapters but I'm suspicious of all of them. DVI dual-link uses two transmitters which you can't just convert into a single transmitter at higher bandwidth--at least not with a passive or even a simple active adapter. With companies lying about what their cables do both on the low end (with examples like this) and at the high end (with examples like Monster cables etc), it sometimes seems impossible to get the right thing without paying a premium for someone to validate them.
|
|
# ? Mar 8, 2022 12:31 |
|
I think Linus Tech Tips invested in a cable tester a bit back and ran through a lot of hdmi and displayport cables, with very varying results. The only real positive was that all the displayport cables they bought (as opposed to getting bundled) matched or surpassed the specs they claimed to support. I'd kind of love to see them add assorted adapters in the mix, if there's a way to make that work.
|
# ? Mar 8, 2022 15:47 |
|
Is it possible for a userspace application to attach to a kernel module in a way that the kernel module can call back into the userspace application? We want to do a diagnostic application and send some information out for recording when there are specific issues, and I'd prefer to do it without polling.
|
# ? Mar 8, 2022 21:53 |
Isn't that what every form of tracing does? It's certainly how dtrace works, because that's how OpenBSM and auditd are capable of doing full-system introspection.
|
|
# ? Mar 8, 2022 22:13 |
|
Rocko Bonaparte posted:Is it possible for a userspace application to attach to a kernel module in a way that the kernel module can call back into the userspace application? We want to do a diagnostic application and send some information out for recording when there are specific issues, and I'd prefer to do it without polling. This sounds like it might be a job for eBPF? Check this paper, specifically section 4.2
|
# ? Mar 9, 2022 01:18 |
|
I feel like I could bend that stuff to my will, but I was aiming for something like a logging system, just without a huge volume of messages. I want to collect some data from our lab machines running all kinds of modules and stuff. A particular example would be a message to note that something we worked on is actually being run. I'm thinking the existing tracing systems could be used, inasmuch as I could have the userspace thing tack on to a certain class of trace events to pass along the messages, but I wouldn't know if these trace solutions allow for that kind of arbitrariness in the messages.
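One low-effort option along those lines: a module can emit arbitrary strings with trace_printk(), and a userspace reader that blocks on trace_pipe gets them pushed as they arrive, with no polling loop (the path is /sys/kernel/debug/tracing/trace_pipe on older kernels without a separate tracefs mount):

```shell
# Blocks until the kernel emits new trace data - no polling needed
sudo cat /sys/kernel/tracing/trace_pipe
```

Whether that's robust enough for lab-wide collection is another question; it's more of a debugging channel than a logging API.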
|
# ? Mar 9, 2022 04:29 |
|
I have a really weird problem I'm trying to solve with hopefully built in tools.. Is there a way to take a folder, create a tar file of said folder, but have that tar file be limited to a specific filesize so it creates subsequent tar files if necessary, BUT (and this is the part I can't figure out) each tarfile itself is a fully standalone package, that is to say doesn't truncate or split files across tarfiles. So I can do something like tar --tape-length=90000 -M -c --file=mytar{1..99}.tar SOURCEFOLDER and it'll create mytar1.tar, mytar2.tar, mytar3.tar, but the binaries which are caught at the tape limit are basically truncated and continue in the next. My goal here is to be able to take each tar file as its own artifact independent of the others, so this kind of relies on tar saying "oh yeah ok this next file won't fit in my size limit so it'll go in the next tar". In my ideal scenario, I could take mytar1.tar and extract it without mytar2 or 3, or do the same with 2 without 1 and 3 and the only consequence would be missing files, but not missing in the sense that half of it exists in this tar and it failed to materialize because the other half isn't available. Essentially all files would live 100% in one tar or another, not on the boundary. I'm articulating this beyond terribly but hopefully someone gets WHAT I'm trying to accomplish, if not why. The reason is really esoteric and specific to some tooling I have to conform to. If the --tape-length or the -M(ultiarchive) operator has some other flag that I'm missing which would render each individual tar non-reliant on the others that would be the best case scenario but I'm going through the docs and I'm not sure that exists. e: I'm also not married to tar command itself, just that the output HAS to be a tar file or series of tars so if there's something that'll do the job as a 3rd party tool I'm way ok with that too. 
e2: OK I think this might do what I need: https://github.com/dmuth/tarsplit some kinda jackal fucked around with this message at 18:24 on Mar 10, 2022 |
# ? Mar 10, 2022 18:04 |
|
|
I don't think you're going to be doing that with a tar one-liner. Multiarchive in particular explicitly states that it requires all tar files to be available to get anything out of the archive. Absent a tool to do exactly what you're asking, I think a script that walks the directory tree and builds a file list conforming to your max size might work. You feed that as an argument to tar and away you go. However, that depends on how busy the tree you're archiving is.. writing your own backup tool means you need to take into account what happens if a file changes, and that quickly puts you into "why did you do this to yourself" territory. I did some Google searches and found this project that seems to accomplish what you're asking: https://github.com/dmuth/tarsplit So maybe that's a place to start. Still comes with the "what if the filesystem changes" problem though. edit - oops that's what I get for taking 30 minutes to type a post.
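A rough sketch of that walk-and-group approach, assuming no single file exceeds the cap, no newlines in file names, and ignoring tar's per-file header overhead:

```shell
#!/usr/bin/env bash
# Sketch: pack SRC into standalone tar files (PREFIX1.tar, PREFIX2.tar, ...),
# each holding whole files only, with summed file sizes kept under MAX bytes.
set -u

pack_dir() {    # usage: pack_dir SRC MAX PREFIX
    local src=$1 max=$2 prefix=$3 n=1 total=0 size f list
    list=$(mktemp)

    flush() {   # write the accumulated file list as one standalone tar
        [ -s "$list" ] || return 0
        tar -cf "${prefix}${n}.tar" -C "$src" -T "$list"
        n=$((n + 1)); total=0; : > "$list"
    }

    while IFS= read -r f; do
        size=$(stat -c %s "$src/$f")
        # start a new archive if this file would push us over the cap
        if [ "$total" -gt 0 ] && [ $((total + size)) -gt "$max" ]; then
            flush
        fi
        printf '%s\n' "$f" >> "$list"
        total=$((total + size))
    done < <(cd "$src" && find . -type f | sort)

    flush       # emit whatever is left as the final chunk
    rm -f "$list"
}
```

e.g. `pack_dir SOURCEFOLDER $((90000 * 1024)) mytar` to roughly mirror the --tape-length example above (tar's --tape-length counts units of 1024 bytes). Each mytarN.tar is a complete archive on its own; extracting one never yields a truncated file.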
|
# ? Mar 10, 2022 18:36 |