VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
I use straight fancontrol from lm-sensors to manage my fans. Not sure if it can create complex stuff, but I used to run it on a slow delay, to avoid my case fans spinning on and off too often.
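For anyone who hasn't touched it: the whole config lives in /etc/fancontrol (normally generated by running pwmconfig) and it's just shell-style variable assignments. A minimal sketch with a slow polling interval looks something like this - the hwmon names and paths below are made up, yours will be whatever pwmconfig detects:

code:
# /etc/fancontrol - sketch only; run pwmconfig to generate the real device paths
# poll every 10 seconds (the "slow delay")
INTERVAL=10
DEVPATH=hwmon2=devices/platform/nct6775.656
DEVNAME=hwmon2=nct6775
# which temp sensor drives which PWM output, and which fan tach it's paired with
FCTEMPS=hwmon2/pwm2=hwmon2/temp2_input
FCFANS=hwmon2/pwm2=hwmon2/fan2_input
# single linear ramp: fan may stop below MINTEMP, runs flat out at MAXTEMP
MINTEMP=hwmon2/pwm2=40
MAXTEMP=hwmon2/pwm2=70
# PWM needed to spin up from a stop / lowest PWM before the fan stalls
MINSTART=hwmon2/pwm2=120
MINSTOP=hwmon2/pwm2=80

That single MINTEMP/MAXTEMP ramp per fan is also why it can't average sensors or do fancier curves by itself.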

counterfeitsaint
Feb 26, 2010

I'm a girl, and you're
gnomes, and it's like
what? Yikes.

Tesseraction posted:

Might be a bit of an obvious question but did you try
code:
(sudo) modprobe amdgpu
?

I know just enough about linux to try a few random things but miss the super obvious stuff.

The problem was, unbeknownst to me, Mint decided to just stay on kernel 5.4 and never update. All the AMD integrated graphics stuff was added during 5.8. Updated the kernel and now everything is good.
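For anyone else on Mint who hits this: the newer kernel series is an opt-in meta-package from the Ubuntu base, so you can check what you're running and move up without leaving apt. A rough sketch, assuming a Mint 20.x / Ubuntu 20.04 base (the package name changes with the release):

code:
# show the currently running kernel
uname -r
# see which kernel meta-package you're tracking
apt list --installed 2>/dev/null | grep linux-generic
# opt into the newer HWE kernel series (5.8+ at the time), then reboot
sudo apt install --install-recommends linux-generic-hwe-20.04

Mint's Update Manager also has a View -> Linux Kernels screen that does roughly the same thing with a GUI.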

Tesseraction
Apr 5, 2009

There's no shame in taking the long road to the answer; you often learn more along the way!

corgski
Feb 6, 2007

Silly goose, you're here forever.

VictualSquid posted:

I use straight fancontrol from lm-sensors to manage my fans. Not sure if it can create complex stuff, but I used to run it on a slow delay, to avoid my case fans spinning on and off too often.

Unfortunately looking at the manpage, I can't see any way to take readings from different sensors and average them together or any way to specify complex curves.

E: that said I got a passable setup by just using one of the motherboard sensors for the case fans and I don't actually run my hard disks that hard so they should be ok.


It looks like that's almost all laptop-specific utilities.

corgski fucked around with this message at 01:34 on Dec 30, 2022

His Divine Shadow
Aug 7, 2000

I'm not a fascist. I'm a priest. Fascists dress up in black and tell people what to do.
I installed Debian on a VM last night after spending some time with WSL and liking it, but experiencing some odd bugs in certain apps because it's not a real linux system. So I decided to go to the next step.

Unfortunately I downloaded the i386 iso thinking i386 = 32 and 64 bit for Intel processors. But nooo, I needed the iso labeled AMD64. So I deleted the VM and reinstalled everything.

Off to a good start :downs: Gonna start setting up emacs after work, planning on using doom emacs and making an environment for coding and learning C in.

Klyith
Aug 3, 2007

GBS Pledge Week

corgski posted:

E: that said I got a passable setup by just using one of the motherboard sensors for the case fans and I don't actually run my hard disks that hard so they should be ok.

Hard drives are pretty insensitive to heat, they'll be fine.
https://www.backblaze.com/blog/hard-drive-temperature-does-it-matter/

I used to use speedfan years ago, but tbqh most mobos have more than adequate fan control in bios these days. And while you can't use GPU temp, the generic system reading is good enough in most cases. It just takes a bit more time to set up nicely, since you have to test and reboot a few times.

corgski
Feb 6, 2007

Silly goose, you're here forever.

Klyith posted:

Hard drives are pretty insensitive to heat, they'll be fine.
https://www.backblaze.com/blog/hard-drive-temperature-does-it-matter/

I used to use speedfan years ago, but tbqh most mobos have more than adequate fan control in bios these days. And while you can't use GPU temp, the generic system reading is good enough in most cases. It just takes a bit more time to set up nicely, since you have to test and reboot a few times.

Good to know re hard disks, but I disagree about mobo fan control being any good, mine had two options: spool my case fans up and down constantly in response to any CPU temperature fluctuation or run them at 100 all the time. The only thing it seems to handle ok is the chipset fan which has its own sensor and spends most of its time stopped.

Tesseraction
Apr 5, 2009

His Divine Shadow posted:

I installed Debian on a VM last night after spending some time with WSL and liking it, but experiencing some odd bugs in certain apps because it's not a real linux system. So I decided to go to the next step.

Unfortunately I downloaded the i386 iso thinking i386 = 32 and 64 bit for Intel processors. But nooo, I needed the iso labeled AMD64. So I deleted the VM and reinstalled everything.

Off to a good start :downs: Gonna start setting up emacs after work, planning on using doom emacs and making an environment for coding and learning C in.

Yeah this is a downside to the way the 64-bit race played out - AMD's instruction set ended up winning the war, so 64-bit versions are all AMD64. When I was first messing around with Linux I had to pick between i386, i486, i586 and i686, and I had no idea what that meant. Luckily for me I just picked i686 and it worked.

corgski posted:

Unfortunately looking at the manpage, I can't see any way to take readings from different sensors and average them together or any way to specify complex curves.

E: that said I got a passable setup by just using one of the motherboard sensors for the case fans and I don't actually run my hard disks that hard so they should be ok.

It looks like that's almost all laptop-specific utilities.

Almost yeah, but it mentions fancontrol-gui and this one which might better suit your needs? https://github.com/markusressel/fan2go

corgski
Feb 6, 2007

Silly goose, you're here forever.

Tesseraction posted:

Almost yeah, but it mentions fancontrol-gui and this one which might better suit your needs? https://github.com/markusressel/fan2go

Oh fan2go looks perfect, I must've skimmed past it!

BlankSystemDaemon
Mar 13, 2009



Tesseraction posted:

Yeah this is a downside to the way the 64-bit race played out - AMD's instruction set ended up winning the war, so 64-bit versions are all AMD64. When I was first messing around with Linux I had to pick between i386, i486, i586 and i686, and I had no idea what that meant. Luckily for me I just picked i686 and it worked.
The sad part is, unless you're doing very specific operations of the kind usually found in userspace in a do-one-thing-well utility, the latency of instructions in the 486-686 microarchitectures often means that a kernel using them ends up being slower than simply using the i386 equivalents, because it's usually processing many small chunks of data.
This is a fairly common argument against building micro-optimized versions of software in general, especially if the machine building the software is also the one the software is deployed on, because of all the wasted CPU time.

There are exceptions to this, of course - for example, if you're deploying a particular piece of software onto a HPC cluster where it's going to stay for the foreseeable future because you're not doing regular updating (since access is tightly controlled, updates aren't as big of a deal as well as the fact that time spent updating is also time not spent processing).

What's perhaps somewhat interesting is that EM64T/x86_64/Intel 64 is not quite the same as AMD64 - although you're only likely to encounter the differences in systems programming, as both GCC and LLVM, as well as several proprietary compilers (the Intel C Compiler being one exception), will output machine code that avoids them.
So far as I remember, AMD64 CPUs can't do SYSENTER/SYSEXIT whereas Intel 64 can do it in long mode, Intel 64 can't save reduced floating-point state, and BSF/BSR/FLD/FSTP instructions behave differently on AMD64 and Intel 64 respectively.
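As an aside, if you're ever unsure whether a given box can take an amd64 image in the first place, checking for the long mode flag is enough to settle it (nothing distro-specific here):

code:
# "CPU op-mode(s): 32-bit, 64-bit" means it can run amd64
lscpu | grep -i 'op-mode'
# or: a non-zero count means the 'lm' (long mode) flag is present
grep -c -w lm /proc/cpuinfo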

Tesseraction
Apr 5, 2009

BlankSystemDaemon posted:

<Good explanation of the differences.>

Oh huh, that's actually really interesting - so is that why they default back to i386 now for OS downloads? Also wasn't aware about the Intel/AMD split for x64.

BlankSystemDaemon
Mar 13, 2009



Tesseraction posted:

Oh huh, that's actually really interesting - so is that why they default back to i386 now for OS downloads? Also wasn't aware about the Intel/AMD split for x64.
At this point, there really isn't much reason to stick with i386 (*1) - you can't buy a CPU that doesn't support some form of long mode (and you haven't been able to for at least a decade, though older implementations had more errata).

The biggest reason i386 is the default is probably that there's an ever-smaller group of people who insist that because i386 software can only address 2^32 bytes (4GB) of memory (*2), it must be better for systems which don't have a lot of memory - even while those same users insist on running software that uses a lot of memory (such as a modern web browser).
These people don't realize that the commoditization of compute and storage is to blame for the much higher memory requirements, and that instead of harping on about using decades-old hardware, they should've been telling developers not to use as much memory as they possibly can.

Besides, it'll all fix itself some time in 2038.

*1: And some don't, for example FreeBSD generates i486 binaries, because it's simply not fun to do anything without an FPU. EDIT: Turns out, FreeBSD generates Pentium Pro/i686 code starting with FreeBSD 13.
*2: There's one exception, since FreeBSD has some code which makes it possible for both the kernel and userspace to address 4GB each, as well as other code that makes it so a kernel without Physical Address Extensions can handle up to 24GB of memory - and if you ask me to explain either piece of code, I'm going to be very sad, because I'm convinced it's magic.

EDIT2: I should probably also mention that FreeBSD "i386" is the only OS I know of that has 64bit atomic operations for 32bit - which has implications for ZFS, since it relies on 64bit atomics.
Whether this means anything in practice is doubtful, since ZFS will just discard any transaction group that hasn't been atomically written to disk in case of a data loss event.

BlankSystemDaemon fucked around with this message at 02:20 on Dec 31, 2022

Computer viking
May 30, 2011
Now with less breakage.

I personally find the x32 ABI a fun idea - use all the other amd64 features, but stick to 32-bit pointers to reduce memory overhead. I don't really see it making a lot of sense outside very memory constrained yet still amd64 systems, and that's hardly a huge market. Still, kind of neat.

Anarchy Stocking
Jan 19, 2006

O wicked spirit born of a lost soul in limbo!

These were great posts. Thank you!

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

I personally find the x32 ABI a fun idea - use all the other amd64 features, but stick to 32-bit pointers to reduce memory overhead. I don't really see it making a lot of sense outside very memory constrained yet still amd64 systems, and that's hardly a huge market. Still, kind of neat.
While that's an interesting idea, I'm not quite sure I see the utility.
Performance-sensitive workloads are going to benefit from having 64bit pointers simply because it lets the process cache more data in memory, and anything that isn't performance sensitive isn't going to benefit from fp arithmetic being done using SSE registers for reasons that seem pointless to belabour.

Meanwhile, the main use-case seems to be "extreme benchmarking" (whatever that is - it sounds like some kind of extremely nerdy "sport", and that's me saying that), whereas the slides from the presentation on it cite a SPEC benchmark showing only a 4-8% performance benefit.
Since we can't see the mean/median/mode, standard deviation, confidence interval, or anything else that could be used to validate whether that's within the margin of error, I'm tempted to say that it is.

BlankSystemDaemon fucked around with this message at 21:37 on Jan 1, 2023

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

BlankSystemDaemon posted:

Performance-sensitive workloads are going to benefit from having 64bit pointers simply because it lets the process cache more data in memory, and anything that isn't performance sensitive isn't going to benefit from fp arithmetic being done using SSE registers for reasons that seem pointless to belabour.

For a long time Chrome and Firefox both resisted going to 64-bit applications because browsers have a lot of pointers in them, and doubling their size means caches are 20-50% less effective, especially the d$. Browsers care a bit about overall RAM usage (no, really) but they really care about cache utilization, and “just switch to x86-64” was a measurable performance hit in a lot of cases. One of the V8 folks has a blog post about compressing pointers down to 32 bits in some hot structures and it made a material difference in tight-loop workloads, but of course I can’t find it now. The punchline, as I recall, was that the additional integer ops were cheaper than the cache costs.

Chrome’s multiprocessing model means a per-process 4G limit isn’t as big a deal, also, and a lot of applications aren’t really limited by their capacity for RAM-cacheable data as much as they are by being able to chew data fast enough through the d$ and CPU.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

BlankSystemDaemon posted:

While that's an interesting idea, I'm not quite sure I see the utility.
Performance-sensitive workloads are going to benefit from having 64bit pointers simply because it lets the process cache more data in memory, and anything that isn't performance sensitive isn't going to benefit from fp arithmetic being done using SSE registers for reasons that seem pointless to belabour.

Meanwhile, the main use-case seems to be "extreme benchmarking" (whatever that is - it sounds like some kind of extremely nerdy "sport", and that's me saying that), whereas the slides from the presentation on it cite a SPEC benchmark showing only a 4-8% performance benefit.
Since we can't see the mean/median/mode, standard deviation, confidence interval, or anything else that could be used to validate whether that's within the margin of error, I'm tempted to say that it is.

Take it from Java-land, where they take advantage of Java's minimum alignment being pretty big and heap memory not being byte-addressable to use 32-bit pointers all the way up to 32GB of heap size. 32-bit pointers are a decently large performance benefit because more pointers fit into cache, as mentioned, and to boot, in programs with tons of pointers the memory saved on just storing pointers can be significant and add up to some real memory:

https://blog.codecentric.de/35gb-heap-less-32gb-java-jvm-memory-oddities
https://confluence.atlassian.com/jirakb/do-not-use-heap-sizes-between-32-gb-and-47-gb-in-jira-compressed-oops-1167745277.html

32-bit pointers are such a huge improvement all around that Atlassian recommends that you should give JIRA 32GB of heap and it will perform better than JIRA with 45GB of heap!

The browser vendors do a significant amount of work to keep using 32-bit pointers in most parts of their applications. One of these is probably the article Subjunctive was referencing:

https://v8.dev/blog/oilpan-pointer-compression - Pointer compression for their DOM GC.
https://v8.dev/blog/pointer-compression - Pointer compression across the whole browser
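If anyone wants to see compressed oops doing its thing on their own machine, a quick check is to ask HotSpot what it decided (this assumes a HotSpot-based JVM; the flag silently turns itself off once -Xmx crosses the ~32GB line):

code:
# show whether compressed oops are in effect for a given heap size
java -Xmx31g -XX:+PrintFlagsFinal -version | grep -i UseCompressedOops   # reports true
java -Xmx33g -XX:+PrintFlagsFinal -version | grep -i UseCompressedOops   # reports false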

Twerk from Home fucked around with this message at 22:23 on Jan 1, 2023

pseudorandom name
May 6, 2007

x32 was some idiocy that nobody outside of Intel ever cared about and even Intel gave up on it.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨


This was the one, thank you! Some smart motherfuckers on that team.

Pointer compression in a 64-bit-pointer ISA is probably a better approach for large working set apps than x32 though because pointer compression is enough engineering work that you don’t want to have to do it everywhere in order to address more than 4GB. Bring back the segment register!

BlankSystemDaemon
Mar 13, 2009



Subjunctive posted:

For a long time Chrome and Firefox both resisted going to 64-bit applications because browsers have a lot of pointers in them, and doubling their size means caches are 20-50% less effective, especially the d$. Browsers care a bit about overall RAM usage (no, really) but they really care about cache utilization, and “just switch to x86-64” was a measurable performance hit in a lot of cases. One of the V8 folks has a blog post about compressing pointers down to 32 bits in some hot structures and it made a material difference in tight-loop workloads, but of course I can’t find it now. The punchline, as I recall, was that the additional integer ops were cheaper than the cache costs.

Chrome’s multiprocessing model means a per-process 4G limit isn’t as big a deal, also, and a lot of applications aren’t really limited by their capacity for RAM-cacheable data as much as they are by being able to chew data fast enough through the d$ and CPU.
This seems incredibly counter-intuitive when you contrast it with the notion of how detailed knowledge of the sub-syscall systems can improve performance.

Twerk from Home posted:

Take it from Java-land, where they take advantage of Java's minimum alignment being pretty big and heap memory not being byte-addressable to use 32-bit pointers all the way up to 32GB of heap size. 32-bit pointers are a decently large performance benefit because more pointers fit into cache, as mentioned, and to boot, in programs with tons of pointers the memory saved on just storing pointers can be significant and add up to some real memory:

https://blog.codecentric.de/35gb-heap-less-32gb-java-jvm-memory-oddities
https://confluence.atlassian.com/jirakb/do-not-use-heap-sizes-between-32-gb-and-47-gb-in-jira-compressed-oops-1167745277.html

32-bit pointers are such a huge improvement all around that Atlassian recommends that you should give JIRA 32GB of heap and it will perform better than JIRA with 45GB of heap!

The browser vendors do a significant amount of work to keep using 32-bit pointers in most parts of their applications. One of these is probably the article Subjunctive was referencing:

https://v8.dev/blog/oilpan-pointer-compression - Pointer compression for their DOM GC.
https://v8.dev/blog/pointer-compression - Pointer compression across the whole browser
Java is garbage collected though, and while garbage collection has surely gotten better since 2005, when it was established that garbage collection is always a bad idea.

Tesseraction
Apr 5, 2009

Twerk from Home posted:

32-bit pointers are such a huge improvement all around that Atlassian recommends that you should give JIRA 32GB of heap and it will perform better than JIRA with 45GB of heap!

Counterpoint: Jira is best with 0GB because you didn't install that bane of my existence.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨


What do those have to do with each other? Varnish is basically a syscall generation machine trading in I/O operations as fast as it can. Browsers really are not, and they do a lot more userspace work per system call than a lightweight proxy. (Which is to Varnish's credit, to be clear. If it were doing as much work per syscall as Chrome does, it would be terribly slow.)

I love PHK's work, but I don't see how what he says there conflicts with Chrome and Firefox's empirical findings. He says, in the article I skimmed because you didn't say what about it was relevant:

quote:

We lack a similar inquiry into algorithm selection in the face of the anisotropic memory access delay caused by virtual memory, CPU caches, write buffers, and other facts of modern hardware.

and I'm talking exactly about analysis of CPU caches and their tradeoffs against additional integer work. AFAIK he's saying to do the work that the v8 team describes in oilpan and elsewhere: design data structures for the properties of modern hardware. Is there some specific conclusion from the article that you meant me to take away related to the effects of pointer width on cache and memory efficiency?

quote:

Java is garbage collected though, and while garbage collection has surely gotten better since 2005, when it was established that garbage collection is always a bad idea.

:rolleye:

What about Java's garbage collector do you think invalidates the savings of pointer compression? If those object references were somehow statically deallocated because of magic[*], how would it not be beneficial to compress the pointers and reduce both cache and virtual memory/TLB pressure?

(Reference counting, a form of non-Appel garbage collection used by Varnish, can also cause unfortunate application pauses due to finalization cascade, wherein dropping the last reference to an object causes it to drop the last reference to another object, which in turn etc., meaning that a bunch of otherwise untouched memory is read back and often mutated, causing a bunch of cache thrashing or even page-ins.)

[*] and you'd require magic, because the lifecycle of objects isn't always knowable statically; it can depend on properties of the program's input. This is why garbage collection in the form of reference counting is so common in non-trivial programs written in languages without built-in garbage collection, be that reference counting like Python and Swift and Rust where the ubiquitous std::rc::Rc is employed, or mark-and-sweep like JS and Ruby and Java, or hybrid systems with refcounting backed up by infrequent marking and sweeping for cycle collection.

v1ld
Apr 16, 2012

The pointer compression refs were neat, thanks for sharing. Hadn't seen that before.

v1ld fucked around with this message at 06:02 on Jan 2, 2023

RFC2324
Jun 7, 2012

http 418

I love it when this thread deep dives like this. Always learn something

Tacos Al Pastor
Jun 20, 2003

Has anyone used youtube-dl? I'm wondering if there is a way to speed up a download. There is a "-r" option which I tried to set to the example 4.2M, but that didn't seem to work. It's downloading everything at a KiB/s rate.

Voodoo Cafe
Jul 19, 2004
"You got, uhh, Holden Caulfield in there, man?"

Tacos Al Pastor posted:

Has anyone used youtube-dl? I'm wondering if there is a way to speed up a download. There is a "-r" option which I tried to set to the example 4.2M, but that didn't seem to work. It's downloading everything at a KiB/s rate.

Use yt-dlp instead, it uses some workarounds to get around Google's bandwidth limits
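The flags mostly carry straight over from youtube-dl, too. A couple worth knowing (the URL is just a placeholder) - and note that -r is a rate *limit*, so it was never going to speed anything up:

code:
# plain download, yt-dlp picks the best available format
yt-dlp "https://www.youtube.com/watch?v=VIDEO_ID"
# download up to 4 fragments in parallel (can help on fragmented/DASH formats)
yt-dlp -N 4 "https://www.youtube.com/watch?v=VIDEO_ID"
# -r / --limit-rate caps the download speed rather than raising it
yt-dlp -r 4.2M "https://www.youtube.com/watch?v=VIDEO_ID"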

Klyith
Aug 3, 2007

GBS Pledge Week
Yeah, google is constantly changing youtube to make downloading harder, and youtube-dl was constantly updated to get around the rate limiting. But youtube-dl itself isn't being updated anymore after a legal to-do from google. yt-dlp is now the main one.

Tacos Al Pastor
Jun 20, 2003

Klyith posted:

Yeah, google is constantly changing youtube to make downloading harder, and youtube-dl was constantly updated to get around the rate limiting. But youtube-dl itself isn't being updated anymore after a legal to-do from google. yt-dlp is now the main one.

Thanks guys! Ill give it a shot and let you know how this works out

edit: worked great. So much faster. Thanks!

Tacos Al Pastor fucked around with this message at 08:17 on Jan 3, 2023

Tesseraction
Apr 5, 2009

Klyith posted:

Yeah, google is constantly changing youtube to make downloading harder, and youtube-dl was constantly updated to get around the rate limiting. But youtube-dl itself isn't being updated anymore after a legal to-do from google. yt-dlp is now the main one.

How does yt-dlp avoid the same legal to-do? I guess their page deliberately obscures who runs the project... does that work?

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Tesseraction posted:

How does yt-dlp avoid the same legal to-do? I guess their page deliberately obscures who runs the project... does that work?

Maybe the legal issues were the underlying cause, but youtube-dl lost most of its userbase to yt-dlp after the maintainer of the original project basically dropped development for over half a year back in 2021. The fork is just a lot better at this point while still maintaining full backwards compatibility, whereas the last tagged release of youtube-dl is still being throttled and doesn't really offer any other advantage. From an end-user perspective, at least, I don't know why you wouldn't just go with yt-dlp.

Tesseraction
Apr 5, 2009

Ah okay so it's more likely just the standard open source software case of the developer returned to their home planet.

RFC2324
Jun 7, 2012

http 418

Tesseraction posted:

Ah okay so it's more likely just the standard open source software case of the developer returned to their home planet.

The main reason to never engage in oss development

Klyith
Aug 3, 2007

GBS Pledge Week

Tesseraction posted:

Ah okay so it's more likely just the standard open source software case of the developer returned to their home planet.

Google sent a dmca takedown to github and the host of their .org site got sued. So there was never a direct legal threat against the author, but google was apparently going after all distribution.

The eff got involved and google got a lot of blowback from the nerd community. Eventually github put it back up and was like "if google wants to sue us, we'll take it".

Afaik the main author(s) never came back, because why would you after that for a project that needs constant work and probably got little in the way of donations or money? Less "returned to home planet" and more "gently caress this noise I'm out".

And yt-dlp had already been an active project with more features at the time, just less well known.

Tesseraction
Apr 5, 2009

FWIW I'm not saying the dev didn't have the right idea, and peacing out is another way of returning to the home planet. I'm not saying he was killed off-screen.

Or was he?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Klyith posted:

why would you after that for a project that needs constant work and probably got little in the way of donations or money

Every parent in this thread is all :hmmyes:

CaptainSarcastic
Jul 6, 2013



I sometimes wonder (though not strongly enough to do the research, obviously) how long-running open source projects like GIMP, or VLC or FileZilla or Audacity manage to keep going for years and years and years.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



CaptainSarcastic posted:

I sometimes wonder (though not strongly enough to do the research, obviously) how long-running open source projects like GIMP, or VLC or FileZilla or Audacity manage to keep going for years and years and years.

Lots of maintainers so even if some folks return to their home planets they don't leave the project without a rudder.

Tesseraction
Apr 5, 2009

I figure that if an open source project reaches a certain critical mass of maintainers it can go on indefinitely. Otherwise it's usually one person doing it for free eventually having to focus on their daily commitments and not having anyone to pass the torch to.

Then there's people like the guy who makes foobar2000, who's just kept on trucking along for 20 years straight. Technically his project isn't open source, but he's openly stated that's just to prevent forks while it's still being actively updated by him, and that he releases the applicable parts as open source in other projects.

F_Shit_Fitzgerald
Feb 2, 2017



I'm so dumb. I update my system every Monday, and for a while now I've gotten the 'NO_PUBKEY ##################' error. Looking it up, you can invoke

code:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ###########
to fix the issue, and that prompted what was doubtless months' worth of updates to my kernel. So my question: is there a way to automate this process so that if there's ever another public key error, I can download the key right away and avoid this issue?


That's the thing about Linux: just when you feel like you've gotten a grip on managing your system, something happens to humble you. Linux is more of a journey than a destination.

Tesseraction
Apr 5, 2009

There's two solutions here https://stackoverflow.com/questions/69896726/how-to-automate-installation-of-missing-gpg-keys-on-linux

But you could probably roll your own along the lines of

code:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys $(sudo apt-get update 2>&1 | grep -Po "(?<=NO_PUBKEY )\w+" | sort -u)
I just pulled that regex out of my behind, so if someone could make sure I haven't made a dumb error, that would be appreciated.
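Worth flagging that apt-key itself is deprecated on current Debian/Ubuntu releases, so the longer-term fix is to drop the key into a keyring file and point the source at it with signed-by. Rough sketch - the URL and filenames are placeholders for whichever repo is complaining:

code:
sudo install -d -m 0755 /etc/apt/keyrings
# fetch the repo's key and convert it to a binary keyring
curl -fsSL https://example.com/repo/key.asc | sudo gpg --dearmor -o /etc/apt/keyrings/example.gpg
# then reference it in that repo's sources entry, e.g.:
# deb [signed-by=/etc/apt/keyrings/example.gpg] https://example.com/repo stable main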
