|
Rawrbomb posted:I'm not sure if I've understood your usecase, but VS Code with VS Code remote plugin, might be easy way of going about this? You can just SSH (via VS Code) into the machine, and then you get local editing support. https://code.visualstudio.com/docs/remote/remote-overview

Yes, thank you. I have this, too, in addition to a systemd-launched code-server I can connect to with my browser, so I don't have to run electron. I also have acme from Plan 9, but since it uses X11, the laptop won't go to sleep until I terminate the WSL2 container. My use case is setting up a development environment for technical writers who run Windows and who are not normally Internet connected; and also forcing myself to use Windows because I really ought to understand it better. I appreciate everyone trying to help! Mostly I just needed to complain.

cruft fucked around with this message at 17:04 on Dec 18, 2023 |
# ? Dec 18, 2023 16:57 |
|
|
# ? Jun 6, 2024 13:55 |
|
cruft posted:Yes, thank you. I have this, too, in addition to a systemd-launched code-server I can connect to with my browser, so I don't have to run electron. I also have acme from Plan 9, but since it uses X11, the laptop won't go to sleep until I terminate the WSL2 container.

Technical writers are gonna love acme. plan9 owns. how's your acme story?
|
# ? Dec 18, 2023 18:19 |
|
BlankSystemDaemon posted:Well, it's a hypervisor with a bit of thin film to paint over that fact.

While it is extremely similar to the Linuxulator, they probably got the idea from SFU/SUA and the even older POSIX subsystem, which did the same "POSIX as a kernel personality" idea, just with "generic UNIX-y OS with unique quirks" instead of "Linux" as the target platform. It was around from 1993 to Windows 7, with 8 as the weird gap before WSL1 took over. I don't really like working on Windows, but the kernel side has some nice ideas - like the personalities. Which, yes, have a definite parallel on FreeBSD. Wasn't there also a way to run some other UNIX binaries in the older releases?

Computer viking fucked around with this message at 18:52 on Dec 18, 2023 |
# ? Dec 18, 2023 18:45 |
|
mawarannahr posted:plan9 owns how's your acme story? I spent a couple years using acme exclusively. Here's a review! Cool stuff:
Not so cool stuff:
Super annoying stuff:
If somebody made an editor like Acme for Wayland and/or Windows, where I could just use sed and awk and bourne shell to do things, I might give it serious consideration. But at this point I don't think I can go back to Acme except for nostalgia's sake.
|
# ? Dec 18, 2023 19:08 |
do we have a cloud computing thread? My work has a parametric math model we run in c / python which takes 5-30 minutes to complete on the standard laptop we have, depending on the model size, the number of cores, the number of parameters, etc. If we can get all 21 parameters calculated at the same time that would be great, but that either means getting a threadripper or xeon or using the bigger ec2 instances. Actually, since the parameters are perfectly independent, perhaps we could spool up 21 different tiny instances, each one running a parameter...
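To show what I mean by perfectly independent, here's a toy fan-out in plain shell. `compute_one` is a made-up stand-in for the real C/Python model; each run only depends on its own parameter index:

```shell
#!/bin/sh
# Hypothetical stand-in for the real model run; takes a parameter index.
compute_one() {
    echo "parameter $1 result"
}

# Launch all 21 runs as background jobs, then wait for every one to finish.
for p in $(seq 1 21); do
    compute_one "$p" > "result_$p.txt" &
done
wait

ls result_*.txt | wc -l   # 21 result files, one per parameter
```

The same shape maps onto 21 tiny instances or 21 batch jobs: no run needs to talk to any other.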
|
|
# ? Dec 19, 2023 03:39 |
There is the VM thread which would have a pretty heavy overlap with the cloud
|
|
# ? Dec 19, 2023 03:49 |
|
Watermelon Daiquiri posted:do we have a cloud computing thread? My work has a parametric math model we run in c / python which takes 5-30 minutes to complete on the standard laptop we have depending on the model size, the number of cores, the number of parameters etc. If we can get all 21 parameters calculated at the same time that would be great, but that either means getting a threadripper or xeon or using the bigger ec2 instances.

I'm guessing you're asking because you have a lot of these things to run. How many resources can each one effectively use? How much RAM? How many threads? If you start testing them on bigger machines you'll quickly run into NUMA if the threads do a lot of synchronization or coordination with each other, which can make scaling to a large number of threads a pretty poor option vs just letting it run longer on 4-8 threads.

You also have to choose whether you care about throughput or latency. How soon do you need results? Is it OK if there's a queue of work waiting to be picked up while a cluster of workers chews through it, or do you need things done ASAP and you're willing to pay for it?

If each individual parameter's workload is small (less than 10GB and 15 minutes on 2 cores), you could run it on a serverless solution like AWS Lambda. That'd be really easy scaling, because Amazon will happily let you run tens of thousands of these simultaneously with a minimum of fuss. There's a cost premium on this, but if your workload fits into it it's probably worth it so you don't have to have any persistent instances of any kind, and Amazon takes care of most of the harder stuff.

If you have tons of these to run, it's OK if they sit in a queue, and you want a more predictable budget while it's crunching on data, you're into job scheduler or batch processing territory. One relevant Amazon offering there is AWS Batch: https://aws.amazon.com/batch/features. It also has a serverless mode on AWS Fargate where you just have each job ask for a certain amount of CPU / memory and it just gets it, no muss no fuss. Another option would be any modern job scheduler, such as SLURM or Cromwell. AWS ParallelCluster can quickly set up a SLURM cluster for you that will accept jobs to run and schedule them across EC2 nodes. For your workload you probably won't need the unified filesystem, which is by far the most expensive part of ParallelCluster / SLURM on AWS.

I don't know how your organization's IT competency is, but all of these options are going to be vastly more expensive than buying some servers. In my opinion cloud offers much greater value for money for running services, especially when you need good CDN integration and consistent latency to users all over. Batch processing on the cloud still carries the big cost premium, but you don't get as much benefit. You can cost-manage the cloud well with things like committed use agreements and up-front payment, but it's a whole complex rat's nest. I don't know how small and scrappy you are, but if these things are running on your laptops, why not buy some Dell desktops and just let them rip? I can't share exact pricing but my organization can buy 16 core Dell desktops for well under $1k each. Strap 15 of those together into a SLURM cluster yourself and you're cooking with gas.

Edit: Amazon's cleanest fit for what you're wanting to do, if you do it in the cloud, is likely AWS Batch. Here's some more about it, and the kind of things you'd have to consider to use it well: https://aws.amazon.com/blogs/hpc/aws-batch-best-practices/

Twerk from Home fucked around with this message at 04:08 on Dec 19, 2023 |
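Roughly, the Batch version of a 21-parameter sweep is one array job. The queue and job definition names below are placeholders, and the real submission is guarded behind a flag since it needs AWS credentials; the child-side environment variable is the one Batch documents:

```shell
#!/bin/sh
# Placeholder queue/job-definition names; real submission needs AWS
# credentials, so it's opt-in here.
if [ "${SUBMIT_FOR_REAL:-no}" = "yes" ]; then
    aws batch submit-job \
        --job-name param-sweep \
        --job-queue model-queue \
        --job-definition model-job-def \
        --array-properties size=21
fi

# Inside each child container, Batch sets AWS_BATCH_JOB_ARRAY_INDEX (0-20);
# the container entrypoint picks its parameter from that index. Simulated:
idx=${AWS_BATCH_JOB_ARRAY_INDEX:-0}
echo "computing parameter index $idx" | tee child.txt
```

One submit call fans out into 21 children, each with its own index, which is exactly the "each job asks for CPU / memory and just gets it" model.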
# ? Dec 19, 2023 04:06 |
These threads are actually perfectly parallel-- the same model with slightly different starting values, but otherwise siloed off from one another. The reason I'm thinking cloud is, one: my boss really loves amortization and so will avoid big capital purchases, and two: we can expect the actual workload each day to be 2 hours of 21 threads total for now, and that's only happening 80% of the time. Let's say 8 hours a week total, plus the overhead involved in manually setting things up for now. One thing to keep in mind is we are a very small team, and we're currently only in the planning stages for a system to do batch processing. I can't stress enough how much manual work will be done in setting these models off lol. You vastly overestimate the sophistication at play here. I personally would much rather just get a 64 core threadripper server and run things off of that, but since we'll need to access these models out in the field it seems easier to just do everything in aws rather than something like connecting everything to the home network
|
|
# ? Dec 19, 2023 05:00 |
|
There's also the Continuous Integration/build engineering/devops thread, which deals with clouds a lot.
|
# ? Dec 19, 2023 05:30 |
|
Basic question time. I've been using rsync for random crap over the past year, generally for one-off moving a bunch of data around. I'm now trying to set up a thing with rsync doing what it's actually good for, i.e. syncing 2 locations, but it's really loving slow. Doing a dry run test:

rsync --dry-run -vi -auh '/source/path' '/dest/path' >> filelist.txt

Source and destination are already substantially the same. But in the txt file, every file is marked ">f..t......". The decoder ring for the -i output says > means a file is transferred and t means timestamps are to blame. The timestamps are the same. (Well, creation and modification are; I presume rsync isn't looking at access times? That would be stupid.) I have ~100 gigabytes of data but a relatively small number of updated files, so this should be a quick operation. But it's taking about as long as a complete copy.
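For comparison, here's a minimal sanity check on a normal filesystem (paths are temp dirs, not my real data): two identical trees should produce an empty itemized list.

```shell
#!/bin/sh
# Two identical trees: cp -p preserves the file mtime, touch -r syncs the
# directory mtime so nothing differs between source and destination.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/a.txt"
cp -p "$src/a.txt" "$dst/a.txt"
touch -r "$src" "$dst"

# Dry-run itemize: with matching sizes and mtimes, a.txt should not be listed.
rsync --dry-run -i -a "$src/" "$dst/" > itemized.txt
cat itemized.txt
```

On my exfat setup, the same kind of dry run lists every file with >f..t...... instead.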
|
# ? Dec 19, 2023 23:42 |
|
I think rsync is slow if you run it the first time even if the destinations are already synced. For the second run it has caches somewhere somehow and is fast.
|
# ? Dec 19, 2023 23:48 |
|
Do these filesystems have full-resolution timestamps?
|
# ? Dec 19, 2023 23:50 |
|
VictualSquid posted:I think rsync is slow if you run it the first time even if the destinations are already synced. For the second run it has caches somewhere somehow and is fast. this would be cool, but no. rsync uses file size and timestamp by default. OP, I also suspect your filesystems have different timestamp resolution.
|
# ? Dec 20, 2023 00:02 |
|
Yea, I remember having to use the --modify-window option sometimes when the source and destination have different file systems. Might give that a try.
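A toy demonstration of the effect (made-up temp files, not real paths): two copies of a file whose mtimes differ by one second, which is the kind of skew coarse filesystems produce.

```shell
#!/bin/sh
# Identical contents and sizes, mtimes one second apart.
src=$(mktemp -d); dst=$(mktemp -d)
echo hi > "$src/a.txt"
cp "$src/a.txt" "$dst/a.txt"
touch -d '@1000000000' "$src/a.txt"
touch -d '@1000000001' "$dst/a.txt"

# Default quick check flags the file on the 1-second difference...
rsync --dry-run -rti "$src/" "$dst/" > strict.txt
# ...while a 2-second tolerance treats the mtimes as equal.
rsync --dry-run -rti --modify-window=2 "$src/" "$dst/" > tolerant.txt
```

strict.txt lists a.txt with the >f..t...... marker; tolerant.txt doesn't mention it.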
|
# ? Dec 20, 2023 00:20 |
|
One FS is exfat, so maybe timestamps? But trying --modify-window=999 still gives me the list of all files having >f..t...... on them in the dry run. Hmmmm. VictualSquid posted:I think rsync is slow if you run it the first time even if the destinations are already synced. For the second run it has caches somewhere somehow and is fast. Huh. I'm not seeing much about this, but I'm guessing the caches are kinda ephemeral in that case? This is not the first time I've run this sync, but it's a thing I do frequently.
|
# ? Dec 20, 2023 00:47 |
|
rsync doesn't have any caching but it definitely sounds like there is something weird with your case as it's quite a common usage.
|
# ? Dec 20, 2023 00:55 |
|
Nah, I seem to have misremembered about the caching. Easy mistake, as running rsync after the first run is normally so comically fast that I can't believe it actually manages to check anything on disk that quickly. Anyway, you probably have a strange timestamp issue. Either granularity or timezones or something. The first thing I would try is running rsync normally, and then seeing if that fixes things. e: oh, exfat. That is probably your problem there somehow.
|
# ? Dec 20, 2023 01:11 |
|
It uses a chunk of CPU so I think it's doing a checksum on every file, regardless of timestamp? WTF.

man rsync posted:--times, -t

-a includes -t, but for some reason it still does the checksum on everything.

Less Fat Luke posted:rsync doesn't have any caching but it definitely sounds like there is something weird with your case as it's quite a common usage.

Yeah, that's the main reason I'm posting; I figure I gotta be doing something wrong, because this is what rsync does. The exfat source is also using MS Bitlocker, maybe that's the problem?

Edit: setting a --modify-window doesn't help, so I can only assume that it isn't seeing / believing the timestamps at all.

Klyith fucked around with this message at 01:17 on Dec 20, 2023 |
# ? Dec 20, 2023 01:12 |
|
Just for reference, exfat apparently has 10 ms timestamp resolution, up from the 2 seconds of fat/fat32.
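An easy way to see what resolution each side actually stored is GNU stat (the file here is just a throwaway example; run it against the same file on both mounts):

```shell
#!/bin/sh
# GNU stat prints the full stored mtime; on ext4 you'll see sub-second
# digits, while exFAT rounds to its coarser resolution.
touch example.txt
stat -c '%y %n' example.txt   # human-readable mtime with fractional seconds
stat -c '%Y' example.txt      # integer epoch seconds, easy to diff across mounts
```

If the two sides print different fractional parts for the "same" timestamp, that's the quick-check mismatch right there.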
|
# ? Dec 20, 2023 01:14 |
|
Can you paste the relevant paths from the output of `mount` for us? Edit: Tested this locally on both MacOS and Linux w/ ZFS, this is the output from the Mac: code:
code:
Less Fat Luke fucked around with this message at 01:40 on Dec 20, 2023 |
# ? Dec 20, 2023 01:36 |
|
Rsync has flags to turn off timestamp checking and just use file size. I think you can turn on computing a checksum which is slow AF. Maybe exfat triggers the checksum by default.
|
# ? Dec 20, 2023 01:45 |
|
Klyith posted:The exfat source is also using MS Bitlocker, maybe that's the problem?
|
# ? Dec 20, 2023 01:46 |
|
rsync doesn't checksum unless you explicitly use the --checksum flag; it just checks file sizes and timestamps otherwise.

--checksum, -c

This changes the way rsync checks if the files have been changed and are in need of a transfer. Without this option, rsync uses a "quick check" that (by default) checks if each file's size and time of last modification match between the sender and receiver. This option changes this to compare a 128-bit checksum for each file that has a matching size.

Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the transfer, so this can slow things down significantly (and this is prior to any reading that will be done to transfer changed files).

The sending side generates its checksums while it is doing the file-system scan that builds the list of the available files. The receiver generates its checksums when it is scanning for changed files, and will checksum any file that has the same size as the corresponding sender's file: files with either a changed size or a changed checksum are selected for transfer.

Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking a whole-file checksum that is generated as the file is transferred, but that automatic after-the-transfer verification has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check.

The checksum used is auto-negotiated between the client and the server, but can be overridden using either the --checksum-choice (--cc) option or an environment variable that is discussed in that option's section.
|
# ? Dec 20, 2023 02:09 |
|
I believe they're doing rsync on two local filesystems where checksumming will not happen at all in the first place.
|
# ? Dec 20, 2023 02:13 |
|
Less Fat Luke posted:Can you paste the relevant paths from the output of `mount` for us? code:
Less Fat Luke posted:Whoaah I missed this, what? I had no idea you could mount Bitlocker-protected filesystems. Yeah it's very cool, provided by dm-crypt so afaik it's pretty solid. Integrates perfectly with KDE's standard mounting & unmounting system. Been using that for some media that is also used on my windows devices and it's much easier to use bitlocker on linux than anything else on Windows. MS was apparently pretty nice about Bitlocker, not open source but they published the full specs and methods. If this is the downside then I'll live, or maybe this will get me off my butt to switch my craptop over to linux as well. Less Fat Luke posted:I believe they're doing rsync on two local filesystems where checksumming will not happen at all in the first place. Yes, it's all local. Does it not fall back to Klyith fucked around with this message at 02:26 on Dec 20, 2023 |
# ? Dec 20, 2023 02:24 |
|
That's interesting, TIL. And yeah, as far as I know there are no delta transfers for local rsync; it should always do the full file. I wonder if it could be something as simple as the Windows partition having the timestamps stored in the wrong timezone or something (related to Linux keeping the hardware clock in UTC versus Windows), but I have no idea at this point, sorry. Plus you said you've manually checked the timestamps in Linux, so that should rule that out.
|
# ? Dec 20, 2023 02:30 |
|
Klyith posted:This is not the first time I've run this sync, but it's a thing I do frequently.
|
# ? Dec 20, 2023 03:13 |
|
I feel like I had a similar problem with rsync in the last while, but I don’t recall solving it, hmm
|
# ? Dec 21, 2023 03:41 |
|
cruft posted:I spent a couple years using acme exclusively. Here's a review!

Thank you, this post is really cool. I like plan9 stuff a lot -- I learned quite a bit about C from reading the source code for stuff like bio, and used mk many years before I had to learn GNU make. I like that this stuff is still out there. I'm surprised Unicode support isn't great, given the origins of utf8. Re better integration with the shell, multiple cursors, LSP, etc. -- have you tried kakoune? I don't know about its mouse story. It's really cool and comes with Clippy, which is very helpful when learning to use it: I didn't actually know how to use vim until I started using kak. Then I had a need to learn vim in places I couldn't run kak. I use vim now just 'cause it's on the server, and I haven't touched kak, but it's a lot more elegant and easy to learn than vim. e: shoutout to elegant unpopular stuff.

mawarannahr fucked around with this message at 04:42 on Dec 21, 2023 |
# ? Dec 21, 2023 04:32 |
|
NihilCredo posted:Take a look at rclone. It's very similar to rsync, but has some features that you may find useful (filtering by date is built-in, for example, and it backs up symlinks as text files). And when down the line the backup destination inevitably becomes some S3 cold storage bucket, you'll only have minimal changes to apply. Just wanted to say thanks, I did set up rclone and have had success with its SMB backend (very convenient that it can do so in userspace + seems to work better than fuse SMB). My plan is to use it as a Restic backend.
|
# ? Dec 21, 2023 04:35 |
|
Dunno if this is the right thread for this since my question is probably really base level compared to most, but: I've been considering switching over to Linux-- mostly because I'm in the mood for something different, but also because while I was fine with Windows 10, as it approaches the end of its lifespan, I'm not really interested in 11 from what I've seen so far. I've been dabbling with different distros on VMs for a while and the one I keep coming back to is Arch as I like the feeling of being able to completely personalize my experience. That's been all well and good-- I'm still learning and I've been enjoying the experience of learning, but I've been having a lot of trouble with anything that's Wayland-based. KDE with Wayland mostly works, but it feels really clunky compared to the X11 version. I also want to learn how to use a tiling window manager like Hyprland or Sway or i3, but I can't get the former two to work properly at all in a VM, and I'm concerned that they're going to be a pain when I install for real since I've got a Nvidia GPU. It'd be a bummer if I couldn't use the former two because I've seen some really nice clean setups with fluid animations and I'm a sucker for that kind of thing. Am I just better off staying away from Wayland until I get around to upgrading my computer so I can get an AMD GPU, or is it just the VM being weird and I'll be fine once I get the Nvidia drivers installed? Also, I'm planning on dual-booting with Windows still just in case there's a game I want to play or program I need to use that's having compatibility issues with Linux-- I'd be able to access files from either OS so long as I just push saved images/documents/videos/steam installs to the same directory on a separate partition, right?
|
# ? Dec 23, 2023 14:41 |
|
Mounting windows partitions in linux works fairly well these days. And WSL should be able to mount a linux partition, though I never actually tried it. Having a third drive/partition does work, the same as using a removable drive. Wayland is in a strange state where some things work much better than X11 and others work much less well. For KDE specifically, the next version in January is supposed to add lots of features for Wayland. EndeavourOS is Arch with a sane installer, and you might try it. They also have usable defaults and a tutorial for one of the tiling WMs. Though it didn't click for me and I went back to KDE. This was on my laptop, which has an Nvidia GPU.
|
# ? Dec 23, 2023 14:59 |
|
I'm going to suggest you physically disconnect that Windows disk so you can feel free to drink around with distributions and reinstall a lot, without having to worry about accidentally nuking the OS you're familiar with. No I've never done that why would you assume such a thing
|
# ? Dec 23, 2023 15:16 |
|
mawarannahr posted:I'm surprised Unicode support isn't great, given the origins of utf8. Unicode support is great, but the way the font engine does things means you need one font with every glyph, which is not how any modern software wants the system to work. Instead of the tofu block for unknown glyphs, you get a tiny Russ Cox head.
|
# ? Dec 23, 2023 15:19 |
|
Framboise posted:Dunno if this is the right thread for this since my question is probably really base level compared to most, but:

Is your computer a desktop, and does it have integrated graphics as well? It's kind of a pita so I'm not sure I should suggest it, but you could do GPU passthrough and use the integrated graphics for Linux.

mystes fucked around with this message at 16:03 on Dec 23, 2023 |
# ? Dec 23, 2023 15:59 |
|
Framboise posted:Dunno if this is the right thread for this since my question is probably really base level compared to most, but:

Wayland and Nvidia aren't there yet by all reports, and it sounds like you're just confirming it. Maybe try it with the next big KDE release, but I'm guessing you'll need to wait on the open source Nvidia driver (NVK, I think?) or some update by Nvidia themselves. You can share Windows partitions on Linux pretty easily, yeah, but be aware it may not be as seamless as you hope, due to things like different line endings. There are utils like dos2unix/unix2dos to easily and quickly change text files, but if you plan on doing it back and forth repeatedly it's not gonna be a streamlined experience.
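If dos2unix isn't installed, stripping carriage returns is a one-liner with GNU sed. A quick sketch on a throwaway file:

```shell
#!/bin/sh
# Make a file with Windows CRLF line endings, then strip the trailing \r
# in place (GNU sed; dos2unix crlf.txt does the same, unix2dos converts back).
printf 'line one\r\nline two\r\n' > crlf.txt
sed -i 's/\r$//' crlf.txt
od -c crlf.txt   # no \r characters should remain, only \n
```

Going the other direction repeatedly is where it stops being streamlined, since every round trip touches every line.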
|
# ? Dec 23, 2023 17:39 |
|
yeah if you have Nvidia just use the proprietary drivers tbh and stick to X11 for now at least.
|
# ? Dec 23, 2023 18:11 |
|
Oh I love leaving the house for two days. I woke up (at my parents' house) to two sets of messages from my partner - who is a reasonably competent IT dude himself. The oldest one was "weird, my PC doesn't turn on. Oh well I'll look at it later." The second one is "several of my partitions are just gone and heaps of files are randomly truncated; also it keeps greyscreening." For fun the damage is across both sata and nvme drives, and I think his Windows install is also dead. I'm quite sure his PC is cursed, because it has the weirdest issues and they didn't go away after a MB/RAM/PSU/GPU replacement last year. I'm beginning to wonder if the steel in his case was recycled from a war relic or something. (And no this is not a question, sorry. Just wanted to complain.) Computer viking fucked around with this message at 11:01 on Dec 24, 2023 |
# ? Dec 24, 2023 10:57 |
|
Mr. Crow posted:Wayland and nvidia arent there yet by all reports and sounds like your just confirming it. Maybe try it with the next big KDE release but I'm guessing you'll need to wait on the open source nvidia driver (NVK i think?) or some update by nvidia themselves. Windows cannot see Linux partitions (ext4 eg) natively so be aware of that. You'd want to have a Windows (NTFS) partition you mount from Linux not the other way around. The CR/LF thing with plain text files is a thing but generally if Windows sees a Unix-style (LF only) text file you created on the Linux side it won't gently caress around with it these days. Binary files such as jpegs or whatever shouldn't be an issue. Also if you haven't tried WSL2 on Windows, give it a go. It's basically real Linux running in a tightly integrated Windows VM and ngl it's pretty good these days.
|
# ? Dec 24, 2023 13:58 |
|
|
# ? Jun 6, 2024 13:55 |
|
I don't think I've seen a program gently caress up CR/LF in over a decade. One app I use on linux fails on text files encoded in Windows-1252 if they have particular non-ascii characters. I had a lot of recipes saved with ° or ½ characters that it barfed on. But that is ReText, not a standard widely-used text editor like Kate or Gnome Text. Normal apps, no problem.
|
# ? Dec 24, 2023 17:33 |