|
Subjunctive posted:Keep looking!
|
# ? Mar 29, 2024 23:12 |
|
Klyith posted:oh god why There's always one, and everyone knows them.
|
# ? Mar 30, 2024 00:21 |
|
lol, oof Worth linking: https://www.openwall.com/lists/oss-security/2024/03/29/4 And if you use Arch you should read: https://archlinux.org/news/the-xz-package-has-been-backdoored/ and https://security.archlinux.org/ASA-202403-1
|
# ? Mar 30, 2024 01:25 |
|
Lol, but it's also depressing that there's not really a way to get testing infrastructure to find this stuff out until some gormless idiot using arch linux steps right into a cow pie. I remember reading... a linus email? about this kind of situation about kernel testing RCs. God bless you bleeding edge rolling releasers
|
# ? Mar 30, 2024 02:10 |
hifi posted:Lol, but it's also depressing that there's not really a way to get testing infrastructure to find this stuff out until some gormless idiot using arch linux steps right into a cow pie. I remember reading... a linus email? about this kind of situation about kernel testing RCs. God bless you bleeding edge rolling releasers
|
|
# ? Mar 30, 2024 02:15 |
|
The threat actor used sockpuppets to run a low-key harassment campaign to encourage the xz maintainer to accept a co-maintainer, became the co-maintainer, then systematically sabotaged the project's fuzzing, regression testing and security response infrastructure. It was only noticed because a PostgreSQL developer was trying to profile changes on an idle system, wondered why sshd was using so much CPU time, and then got suspicious when the perf traces couldn't attribute samples to any known function in liblzma.
|
# ? Mar 30, 2024 02:16 |
|
BlankSystemDaemon posted:Did you not notice that the APT deliberately introduced a new testing suite to turn off fuzzing? I am on Team Fedora
|
# ? Mar 30, 2024 02:17 |
|
I like the part where OpenSSH sshd doesn't link against liblzma anyways, but Debian et al. use a patched version that links against libsystemd which links against liblzma. Didn't Debian learn after 2008 not to mess with sshd?
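If you want to check a system for that linkage chain, ldd makes it visible. A sketch; /usr/sbin/sshd is the usual Debian/Ubuntu path and may differ on your distro:

```shell
# Show whether sshd picks up libsystemd (and, transitively, liblzma).
# /usr/sbin/sshd is the common Debian/Ubuntu location; adjust as needed.
ldd /usr/sbin/sshd 2>/dev/null | grep -E 'systemd|lzma' \
  || echo "no direct systemd/lzma link (or no sshd here)"
# The transitive hop on affected systems would look like:
# ldd /usr/lib/x86_64-linux-gnu/libsystemd.so.0 | grep lzma
```

On an unpatched upstream build the grep comes back empty, which is the whole point.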
|
# ? Mar 30, 2024 02:39 |
|
hifi posted:I am on Team Fedora Edit: there's all kinda extra legwork this person or entity did, this is from HN (ugh): quote:Fascinating. Just yesterday the author added a `SECURITY.md` file to the `xz-java` project. quote:Out of curiosity I looked at the list of followers of the account who committed the backdoor. Less Fat Luke fucked around with this message at 02:44 on Mar 30, 2024 |
# ? Mar 30, 2024 02:41 |
|
hifi posted:Lol, but it's also depressing that there's not really a way to get testing infrastructure to find this stuff out until some gormless idiot using arch linux steps right into a cow pie. I remember reading... a linus email? about this kind of situation about kernel testing RCs. God bless you bleeding edge rolling releasers what would you have had the testing infrastructure do, in this case?
|
# ? Mar 30, 2024 02:46 |
|
Subjunctive posted:what would you have had the testing infrastructure do, in this case? I am wishing for a magic wand you can wave and catch all the software bugs ever created
|
# ? Mar 30, 2024 02:56 |
|
There are a lot of interesting components to this whole debacle, but one part that stands out is that the malicious autoconf m4 macro only exists in the upstream release tarball (still hosted on GitHub), not in the source repo. So anyone looking at a post-configure build of the tarball would see the injected malicious source, but anyone cloning from git wouldn't. And yet the two are "close enough" that missing autoconf bits may easily be excused by someone casually comparing the two. Going forward I think a reasonable policy change on Debian's side would be that any upstream project that uses git or similar revision control (which is most everything now) should always be built from a clone/checkout of an upstream repo tag or commit id, and not from a separately packaged source tarball, to ensure that exactly this scenario doesn't happen. Not that you can't have bad actors on the distribution package maintainer side either.
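A toy illustration of the failure mode (directory names below are made up, not the real xz trees, though build-to-host.m4 is the actual file the attacker tampered with): a recursive diff surfaces a file that only ships in the "tarball" copy immediately.

```shell
# Toy demo: simulate a git checkout and a release tarball that differ by one
# extra m4 file, then diff them. Paths are invented for illustration.
rm -rf /tmp/relcheck
mkdir -p /tmp/relcheck/git-checkout/m4 /tmp/relcheck/tarball/m4
echo 'AC_DEFUN([X_COMMON], [:])' > /tmp/relcheck/git-checkout/m4/common.m4
cp /tmp/relcheck/git-checkout/m4/common.m4 /tmp/relcheck/tarball/m4/common.m4
echo '# extra macro that only ships in the tarball' \
  > /tmp/relcheck/tarball/m4/build-to-host.m4
diff -r /tmp/relcheck/git-checkout /tmp/relcheck/tarball || true
```

The catch, as noted above, is that "Only in tarball: some autoconf file" is exactly the kind of diff people were trained to wave through.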
|
# ? Mar 30, 2024 05:15 |
|
ExcessBLarg! posted:I like the part where OpenSSH sshd doesn't link against liblzma anyways, but Debian et al. use a patched version that links against libsystemd which links against liblzma. lol if this decision was influenced by the same group
|
# ? Mar 30, 2024 05:30 |
|
Less Fat Luke posted:Edit: there's all kinda extra legwork this person or entity did, this is from HN (ugh): Not going to lie, this is pretty impressive. We've been worrying for years about supply chain attacks and other malicious actors in FOSS, with relatively few examples to point at. This feels like all those worries materialised: a kernel-adjacent project that will allow some dude behind a VPN to become co-maintainer, and poor CI practices allowing a mismatch between the git and the released artifacts. We'll be referencing this case for decades.
|
# ? Mar 30, 2024 09:15 |
|
Oh yeah it's gonna be a hugely valuable case study. At my last job we were rolling out supply chain integrity stuff around the Ruby community and getting people onboard was like pulling teeth - this should make it a lot easier to explain why it's more important.
|
# ? Mar 30, 2024 14:20 |
|
quote:We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Ubuntu maintainer said. "He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise." This might be a dumb question but why do there have to be "binary test files", rather than sample data generated from testing code? mila kunis fucked around with this message at 14:23 on Mar 30, 2024 |
# ? Mar 30, 2024 14:21 |
|
xz is a compression algorithm. I believe the xz files containing the object to be injected were part of the tests to see if the built code could uncompress it (and then compare the output to a hash of the uncompressed file) or see if a recompressed version of the file was as compressed or better than some previously recorded result. You need some sample files to be able to test for correctness and that there have been no performance regressions, but those types of files are expected to be totally benign. waffle iron fucked around with this message at 14:35 on Mar 30, 2024 |
# ? Mar 30, 2024 14:26 |
|
waffle iron posted:xz is a compression algorithm. I believe the xz files containing the object to be injected were part of the tests to see if the built code could uncompress it (and then compare the output to a hash of the uncompressed file) or see if a recompressed version of the file was as compressed or better than some previously recorded result. Right, what I'm asking is - can't test code generate these files rather than having strange binaries checked in? Have your test code make a large file, then test your compression code on it. That way your test data source code is also checked in and verifiable? I don't work on this kinda stuff and my knowledge is limited, so wondering if that kinda thing is viable
|
# ? Mar 30, 2024 14:34 |
|
In theory you could test performance of hard-to-compress data by creating a file from /dev/random or similar, but that would not be representative of real-world performance. And it isn't the level of repeatability that you need in testing.
|
# ? Mar 30, 2024 14:35 |
|
mila kunis posted:Right, what I'm asking is - can't test code generate these files rather than having strange binaries checked in? How would your test code generate realistic compression targets like
|
# ? Mar 30, 2024 14:37 |
|
One final point. Automated testing of software works on highly crafted scenarios with a well defined previously computed result and each test case relies on as little of the target platform as possible. In my bad example of using target system randomness to test, how do you even know the platform's kernel gives you sufficiently random output?
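For the repeatability half there is a middle ground: derive the "random" test bytes from a fixed key with a keyed stream (openssl's AES-CTR over zeros, in this sketch), so every platform regenerates byte-identical data without trusting the kernel's RNG at all. The key, IV, and size below are arbitrary choices for illustration.

```shell
# Sketch: reproducible pseudo-random test data from a fixed seed, instead of
# checking in an opaque binary. AES-CTR with a constant key and IV yields the
# same byte stream on every machine, independent of /dev/random.
head -c 65536 /dev/zero | openssl enc -aes-128-ctr -nosalt \
  -K 000102030405060708090a0b0c0d0e0f \
  -iv 00000000000000000000000000000000 > /tmp/testdata.bin
# Run it again and the output is byte-identical:
head -c 65536 /dev/zero | openssl enc -aes-128-ctr -nosalt \
  -K 000102030405060708090a0b0c0d0e0f \
  -iv 00000000000000000000000000000000 > /tmp/testdata2.bin
cmp /tmp/testdata.bin /tmp/testdata2.bin && echo reproducible
```

Of course this only covers the "incompressible noise" case; it doesn't help with the curated edge-case files discussed below.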
|
# ? Mar 30, 2024 14:42 |
|
mila kunis posted:This might be a dumb question but why do there have to be "binary test files", rather than sample data generated from testing code? Outliers from bug reports are one of them. Organic data rarely matches test cases.
|
# ? Mar 30, 2024 15:04 |
|
mila kunis posted:Have your test code make a large file, then test your compression code on it. That way your test data source code is also checked in in and verifiable? I think your test binaries want to be things with "weird" attributes that are either difficult for your algorithm or hammer on edge cases that might break it. The compression dictionary uses 32-bit words, here's a file full of 31-bit patterns. A bug 5 years ago would cause a crash if your file had a particular stretch of zeros, so this one makes sure we don't regress that. Stuff like that. (I also know nothing about xz data compression. But in a similar area, lossy music compression, there is the concept of "problem samples" -- particular bits of music with qualities that are bad for the algorithms. The people who make and test audio codecs have a bunch of standard ones they've found over the years that they pass around in flac format.) IMO the culprit to focus on isn't the test files, it's that the standard build system in unix uses an ancient script lang that's obfuscated even when you aren't using it to stealth a vulnerability into sshd.
|
# ? Mar 30, 2024 15:06 |
|
The two test files are bad-3-corrupt_lzma2.xz and good-large_compressed.lzma. The first one sounds like the sort of thing where it's easier to ship a custom crafted file than generating it on the fly?
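For the corrupt case you could in principle generate it at build time too; a toy sketch below (just flipping one byte of a valid stream; the real test file presumably needs far more surgical corruption to hit a specific decoder path, which is one reason to ship a crafted file instead):

```shell
# Sketch: make a valid .xz, then clobber one byte to get a corrupt one,
# rather than shipping a prebuilt bad file. Offsets here are arbitrary;
# real regression files usually target much more specific breakage.
printf 'some known test data for xz round-tripping' | xz -9 > /tmp/good.xz
cp /tmp/good.xz /tmp/bad.xz
# overwrite one byte in the middle of the stream
printf '\377' | dd of=/tmp/bad.xz bs=1 seek=30 count=1 conv=notrunc 2>/dev/null
xz -t /tmp/good.xz && echo "good file verifies"
xz -t /tmp/bad.xz 2>/dev/null || echo "corruption detected"
```

The integrity check in the container catches the flipped byte, so `xz -t` fails on the bad copy.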
|
# ? Mar 30, 2024 17:23 |
|
Heck, it even had bad in the name, what more do you want?
|
# ? Mar 30, 2024 17:41 |
|
Even seemingly regular things can break software. So testing as many different things as possible is the path. Remember when Dreamweaver would crash when opening a file that was an exact multiple of 8KiB? That was a fun one.
|
# ? Mar 30, 2024 17:46 |
|
hifi posted:Lol, but it's also depressing that there's not really a way to get testing infrastructure to find this stuff out until some gormless idiot using arch linux steps right into a cow pie. I remember reading... a linus email? about this kind of situation about kernel testing RCs. God bless you bleeding edge rolling releasers This was first discovered in debian sid, and I don't think this ever got into Arch. Even the arch security notice is overly cautious. Not only does sshd not link against libsystemd on arch, my understanding is the payload is only packaged in .rpm and .deb formats, it literally never went out to arch users at all. They rebuilt from the repo source, but I've seen at least one person say the 'fixed' and 'bad' binaries are identical anyway. That said, everybody needs to (and most already are) looking at how to roll everything back before this person got involved two years ago. There's hundreds and hundreds of commits that are all questionable. This was I think even more lucky than people realize too. It sounds like libsystemd was removing the dependency on liblzma soon, so there was a window here for the big debian & fedora spring releases to hit the sweet spot, and they probably rushed things too much and got caught. Rescue Toaster fucked around with this message at 23:18 on Apr 1, 2024 |
# ? Apr 1, 2024 23:15 |
|
whats the best thing to put on a 12 inch touchscreen? its mainly gonna be a media device sort of thing. i just want something thats got tablet / touch screen use in mind how does speech to text work? is it built in or w/e? i prefer to dictate my shitposts etc Worf fucked around with this message at 04:10 on Apr 2, 2024 |
# ? Apr 2, 2024 04:06 |
|
Worf posted:how does speech to text work? is it built in or w/e? i prefer to dictate my shitposts etc I was using the program nerd-dictation (designed to run in the background with a keyboard shortcut and generate keyboard events) for some stuff before. I haven't tested it since switching to Wayland, but I think it does work there. There may be something more convenient by now. (Actually apparently they added some transcription support using vosk in kde 6 but just to kmail, which is stupid) mystes fucked around with this message at 04:34 on Apr 2, 2024 |
# ? Apr 2, 2024 04:31 |
|
hifi posted:Lol, but it's also depressing that there's not really a way to get testing infrastructure to find this stuff out until some gormless idiot using arch linux steps right into a cow pie. I remember reading... a linus email? about this kind of situation about kernel testing RCs. God bless you bleeding edge rolling releasers Pretty amazing how mad some people get about Arch Linux. Who hurt you?
|
# ? Apr 2, 2024 05:23 |
|
ubuntu works very poorly with screen autorotate on a tablet / detached 2 in 1 i went around to some forums etc but people's solutions didnt work so im probably gonna shelve the idea of using ubuntu on this for now and see if windows handles that well, until i can maybe find a distro that handles more of the basic functionality better the hardware itself is really awesome for the price though, and the form factor is surprisingly useable as a tablet. (latitude 7210 2-1). maybe not for most people i guess but my hands are big enough to palm a basketball and i use my ipad mini in one hand like a phone lol
|
# ? Apr 6, 2024 10:58 |
|
Worf posted:ubuntu works very poorly with screen autorotate on a tablet / detached 2 in 1 It's gotta be a DE issue more than a Distro issue surely? Which DE were you using?
|
# ? Apr 6, 2024 14:15 |
|
ziasquinn posted:It's gotta be a DE issue more than a Distro issue surely? Which DE were you using? GNOME comes with it off the ubuntu website, I didn't change anything other than trying the troubleshooting steps I saw on multiple posts addressing the same issue.
|
# ? Apr 7, 2024 10:29 |
|
I just mean that I wonder if KDE handles auto rotation easier? Surprised (barely) that GNOME, the one trying to capture touch-interface users, would suck at it. fwiw I love gnome but recognize its failures
|
# ? Apr 7, 2024 18:16 |
|
Is there a way to dynamically control GPU fan speed in Linux? Specifically it's for an RTX 4070 Super. I can manually set the fan speeds using the NVIDIA control panel, but I'd rather set an actual fan curve instead of just having it on jet engine mode during heavy load times.
|
# ? Apr 10, 2024 14:32 |
|
DizzyBum posted:Is there a way to dynamically control GPU fan speed in Linux? Specifically it's for an RTX 4070 Super. I can manually set the fan speeds using the NVIDIA control panel, but I'd rather set an actual fan curve instead of just having it on jet engine mode during heavy load times. green with envy coolercontrol if you want to do other fans too
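If you'd rather script it yourself, the core of what those tools do is just mapping temperature to a duty cycle. A minimal sketch of the curve logic; the nvidia-settings attribute names in the comment are from memory, so double-check them on your system:

```shell
# Minimal fan-curve sketch: linear interpolation between 40C/30% duty and
# 80C/100% duty. Applying it in a loop would use something like (attribute
# names from memory; verify before relying on them):
#   nvidia-settings -a '[gpu:0]/GPUFanControlState=1' \
#                   -a "[fan:0]/GPUTargetFanSpeed=$(fan_duty "$temp")"
fan_duty() {
  t=$1
  if [ "$t" -le 40 ]; then
    echo 30
  elif [ "$t" -ge 80 ]; then
    echo 100
  else
    echo $(( 30 + (t - 40) * 70 / 40 ))
  fi
}
fan_duty 60   # midpoint of the curve: prints 65
```

The dedicated tools are still the better answer since they poll the temperature and restore defaults on exit, but the math is no more than this.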
|
# ? Apr 10, 2024 15:12 |
|
DizzyBum posted:Is there a way to dynamically control GPU fan speed in Linux? Specifically it's for an RTX 4070 Super. I can manually set the fan speeds using the NVIDIA control panel, but I'd rather set an actual fan curve instead of just having it on jet engine mode during heavy load times. If you use GWE, I'd be curious to see how you set the curves.
|
# ? Apr 10, 2024 18:13 |
|
Thinking of dual booting a Linux distro alongside Windows 11. I didn't think it was feasible since I have BitLocker enabled with Windows, but Copilot tells me I can disable BitLocker, give Linux its own partition and install, then re-enable BitLocker when I log into Windows. Firstly, is anyone doing this and if so, any problems? Secondly, I am a complete Linux noob so I need baby's first Linux distro - I have played about with distros in the past but not outside a VM. Zorin OS is quite nice in a virtual machine. Recommendations?
|
# ? Apr 10, 2024 18:48 |
|
I'd just go with Mint for a noob distro. It's fine, it works, everything is good. Dual booting I've always been advised is a ballache and best avoided.
|
# ? Apr 10, 2024 18:52 |
|
WattsvilleBlues posted:Thinking of dual booting a Linux distro alongside Windows 11. I didn't think it was feasible since I have BitLocker enabled with Windows, but Copilot tells me I can disable BitLocker, give Linux its own partition and install, then re-enable BitLocker when I log into Windows. I have used Pop! OS on a number of different machines since 2020. It's built on Ubuntu but adds features, removes some bloat, and makes various tweaks to make it nicer to use. The software repositories are updated more frequently too. Pretty much any Ubuntu-specific tutorials, and Ubuntu packages, will work without modification. Here is a guide I have used to set up dual boot: https://github.com/spxak1/weywot/blob/main/Pop_OS_Dual_Boot.md
|
# ? Apr 10, 2024 19:04 |