cruft
Oct 25, 2007

Rawrbomb posted:

I'm not sure if I've understood your usecase, but VS Code with VS Code remote plugin, might be easy way of going about this? You can just SSH (via VS Code) into the machine, and then you get local editing support. https://code.visualstudio.com/docs/remote/remote-overview

Once they SSH into the machine through vs code, they'll get the VS Code file viewer. Which more or less works like it would locally. It has saved me a few times over trying to teach people SFTP or working with SSH directly.

Yes, thank you. I have this, too, in addition to a systemd-launched code-server I can connect to with my browser, so I don't have to run electron. I also have acme from Plan 9, but since it uses X11, the laptop won't go to sleep until I terminate the WSL2 container.
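
For reference, here's a minimal sketch of the systemd user unit approach, assuming code-server is installed at /usr/bin/code-server and you're fine binding it to localhost (the unit name, path, and port are placeholders, not exactly what I run):
code:
# drop a user unit in place (hypothetical path/port)
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/code-server.service <<'EOF'
[Unit]
Description=code-server (VS Code in the browser)
After=network.target

[Service]
ExecStart=/usr/bin/code-server --bind-addr 127.0.0.1:8080
Restart=on-failure

[Install]
WantedBy=default.target
EOF

# start it now and on every login
systemctl --user daemon-reload
systemctl --user enable --now code-server.service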

My use case is setting up a development environment for technical writers who run Windows and who are not normally Internet connected; and also forcing myself to use Windows because I really ought to understand it better.

I appreciate everyone trying to help! Mostly I just needed to complain :)

cruft fucked around with this message at 17:04 on Dec 18, 2023


mawarannahr
May 21, 2019

cruft posted:

Yes, thank you. I have this, too, in addition to a systemd-launched code-server I can connect to with my browser, so I don't have to run electron. I also have acme from Plan 9, but since it uses X11, the laptop won't go to sleep until I terminate the WSL2 container.

My use case is setting up a development environment for technical writers who run Windows and who are not normally Internet connected; and also forcing myself to use Windows because I really ought to understand it better.

I appreciate everyone trying to help! Mostly I just needed to complain :)

Technical writers are gonna love acme
plan9 owns how's your acme story?

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Well, it's a hypervisor with a bit of thin film to paint over that fact.

WSL1 is the implementation with tight integration, because it's a syscall emulation layer similar to the FreeBSD linuxulator (which is presumably where Microsoft got the idea, when they were porting dtrace).

While it is extremely similar to the linuxulator, they probably got the idea from SFU/SUA and the even older POSIX subsystem, which did the same "POSIX as a kernel personality" idea, just with "generic UNIX-y OS with unique quirks" instead of "Linux" as the target platform. It was around from 1993 to Windows 7, with 8 as the weird gap before WSL1 took over.

I don't really like working on windows, but the kernel side has some nice ideas - like the personalities, which, yes, have a definite parallel on FreeBSD. Wasn't there also a way to run some other UNIX binaries in the older releases?

Computer viking fucked around with this message at 18:52 on Dec 18, 2023

cruft
Oct 25, 2007

mawarannahr posted:

plan9 owns how's your acme story?

I spent a couple years using acme exclusively. Here's a review!

Cool stuff:

  • The basic premise rules the school. I already know how to mess with text files in the shell, and it was refreshing not to have to learn yet another Domain Specific Language to do the same thing in my text editor.
  • "What if you could just highlight text and easily pipe it to whatever executable you want", while part of this, is cool enough it's worth mentioning in its own bullet point.
  • Being able to manipulate the text editor through files is also cool. It's probably cooler in plan9 where that filesystem is magically mounted, but using the 9p command wasn't too bad.
  • I actually like using a proportional font in my text editor now. Most of the time.

Not so cool stuff:

  • The fonts are not great. Unicode support is not great. You can work around this with some settings.
  • The font rendering isn't great either, even after you've changed to a different font. Glyph borrowing doesn't exist. It's just sort of rudimentary fonts.
  • You absolutely positively must have a mouse with 3 buttons, and it's very important to be able to precisely position the mouse pointer. I gave up and bought the ergo mouse the plan9 people like, and it was very nice. But using acme with a trackpad was lame, and using any other mouse was almost out of the question.
  • Make the up and down arrow work like every other damned thing ever made, for crying out loud.

Super annoying stuff:

  • After moving everything to Wayland, it was the only legacy X11 app I was using. This looked pretty bad on my HiDPI monitor (yes, I tried that. I tried that too. Yes, that too. Thank you for your suggestions.)
  • I'm glad Rob Pike can live in a world where \x09 is the only character needed for indenting code. I wish I could live there, too.
  • LSP is a really nice thing to have while writing code.
  • Even frickin' Go is now better in VSCode: I just start typing "fmt.Println", save the file, and it automagically imports the fmt module at the top. And runs "go fmt". And tells me about syntax errors.
  • Atom's ^D multiple cursor thing was a game changer for text editing. I don't think I could live without it at this point.

If somebody made an editor like Acme for Wayland and/or Windows, where I could just use sed and awk and bourne shell to do things, I might give it serious consideration. But at this point I don't think I can go back to Acme except for nostalgia's sake.
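
To give a flavor of the file-interface bullet above: with plan9port's 9p command you can poke at a running acme from an ordinary shell. A rough sketch from memory (the window id 2 is just an example):
code:
# list the windows acme currently has open (id, tag, state)
9p read acme/index

# read the body of window 2 like any other file and pipe it wherever
9p read acme/2/body | wc -l

# tell window 2 to write itself to disk, same as clicking Put
echo put | 9p write acme/2/ctl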

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
do we have a cloud computing thread? My work has a parametric math model we run in c / python which takes 5-30 minutes to complete on the standard laptop we have depending on the model size, the number of cores, the number of parameters etc. If we can get all 21 parameters calculated at the same time that would be great, but that either means getting a threadripper or xeon or using the bigger ec2 instances.


Actually since the parameters are perfectly independent, perhaps we could spool up 21 different tiny instances each one running a parameter...

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



There is the VM thread, which would have a pretty heavy overlap with the cloud.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Watermelon Daiquiri posted:

do we have a cloud computing thread? My work has a parametric math model we run in c / python which takes 5-30 minutes to complete on the standard laptop we have depending on the model size, the number of cores, the number of parameters etc. If we can get all 21 parameters calculated at the same time that would be great, but that either means getting a threadripper or xeon or using the bigger ec2 instances.


Actually since the parameters are perfectly independent, perhaps we could spool up 21 different tiny instances each one running a parameter...

I'm guessing you're asking because you have a lot of these things to run. How many resources can each one effectively use? How much RAM? How many threads? If you start testing them on bigger machines you'll quickly run into NUMA if the threads do a lot of synchronization or coordination with each other, which can make scaling to a large number of threads a pretty poor option vs just letting it run longer on 4-8 threads. You also have to choose whether you care more about throughput or latency. How soon do you need results? Is it OK if there's a queue of work waiting to be picked up while a cluster of workers chews through it, or do you need things done ASAP and you're willing to pay for it?

If each individual parameter's workload is small (less than 10GB and 15 minutes on 2 cores), you could run it on a serverless solution like AWS Lambda. That'd be really easy scaling, because Amazon will happily let you run tens of thousands of these simultaneously with a minimum of fuss. There's a cost premium on this, but if your workload fits into it it's probably worth it so you don't have to have any persistent instances of any kind, and Amazon takes care of most of the harder stuff.

If you have tons of these to run, it's OK if they sit in a queue, and you want a more predictable budget while it's crunching on data, you're into job scheduler or batch processing territory. One relevant Amazon offering there is AWS Batch: https://aws.amazon.com/batch/features. It also has a serverless mode on AWS Fargate where you just have each job ask for a certain amount of CPU / Memory and it just gets it, no muss no fuss. Another option would be any modern job scheduler, which includes SLURM or Cromwell. AWS Parallelcluster can instantly set up a SLURM cluster for you that will accept jobs to run and schedule them across EC2 nodes. For your workload you probably won't need the unified filesystem, which is by far the most expensive part of Parallelcluster / SLURM on AWS.

I don't know what your organization's IT competency is, but all of these options are going to be vastly more expensive than buying some servers. In my opinion cloud offers much greater value for money for running services, especially when you need good CDN integration and consistent latency to users all over. Batch processing on the cloud still carries the big cost premium but you don't get as much benefit. You can cost-manage the cloud well with things like committed-use agreements and up-front payment, but it's a whole complex rat's nest. I don't know how small and scrappy you are, but if these things are running on your laptops, why not buy some Dell desktops and just let them rip? I can't share exact pricing, but my organization can buy 16-core Dell desktops for well under $1k each. Strap 15 of those together into a SLURM cluster yourself and you're cooking with gas.

Edit: Amazon's cleanest fit for what you're wanting to do if you do it in the cloud likely is AWS Batch. Here's some more about it, and the kind of things you'd have to consider to use it well: https://aws.amazon.com/blogs/hpc/aws-batch-best-practices/
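
To make that concrete, the shape I'd expect for 21 independent parameters is a Batch array job: one submit call, 21 child jobs, each picking its parameter from the array index. A rough sketch with the AWS CLI (the queue, job definition, and script names are made up):
code:
# submit 21 copies of the same job definition as an array job
aws batch submit-job \
  --job-name param-sweep \
  --job-queue my-queue \
  --job-definition my-model-job \
  --array-properties size=21

# inside the container, each child job reads its index (0-20) and runs one parameter, e.g.:
# python run_model.py --param-index "$AWS_BATCH_JOB_ARRAY_INDEX"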

Twerk from Home fucked around with this message at 04:08 on Dec 19, 2023

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
These threads are actually perfectly parallel-- the same model with slightly different starting values but otherwise siloed off of one another. The reason I'm thinking cloud is one: my boss really loves amortization and so will avoid big capital purchases and two: we can expect the actual workload each day to be 2 hours of 21 threads total for now, and that's only happening 80% of the time. Let's say 8 hours a week total, plus overhead involved in manually setting things up for now.

One thing to keep in mind is we are a very small team, and while we're currently in the planning stages for a system to do batch processing, I can't stress enough how much manual work will be done in setting these models off lol. You vastly overestimate the sophistication at play here. I personally would much rather just get a 64-core threadripper server and run things off of that, but since we'll need to access these models out in the field it seems easier to just do everything in aws rather than something like connecting everything to the home network.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
There's also the Continuous Integration/build engineering/devops thread, which deals with clouds a lot.

Klyith
Aug 3, 2007

GBS Pledge Week
Basic question time. I've been using rsync for random crap over the past year, generally for one-off moving a bunch of data around. I'm now trying to set up a thing with rsync doing what it's actually good for, ie syncing 2 locations, but it's really loving slow.

Doing a dry run test:
rsync --dry-run -vi -auh '/source/path' '/dest/path' >> filelist.txt

Source and destination are already substantially the same. But in the txt file, every file is marked ">f..t......". The decoder ring for the -i headers says > means a file is transferred and t means timestamps are to blame. The timestamps are the same. (Well creation and modification are, I presume rsync isn't looking at access times? That would be stupid.)


I have ~100 gigabytes of data but a relatively small number of updated files, so this should be a quick operation. But it's taking about as long as a complete copy.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
I think rsync is slow if you run it the first time even if the destinations are already synced. For the second run it has caches somewhere somehow and is fast.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Do these filesystems have full-resolution timestamps?

cruft
Oct 25, 2007

VictualSquid posted:

I think rsync is slow if you run it the first time even if the destinations are already synced. For the second run it has caches somewhere somehow and is fast.

this would be cool, but no. rsync uses file size and timestamp by default.

OP, I also suspect your filesystems have different timestamp resolution.

80k
Jul 3, 2004

careful!
Yea, I remember having to use the --modify-window option sometimes when the source and destination have different file systems. Might give that a try.

Klyith
Aug 3, 2007

GBS Pledge Week
One FS is exfat, so maybe timestamps?

But trying --modify-window=999 still gives me the list of all files having >f..t...... on them in the dry run. Hmmmm.


VictualSquid posted:

I think rsync is slow if you run it the first time even if the destinations are already synced. For the second run it has caches somewhere somehow and is fast.

Huh. I'm not seeing much about this, but I'm guessing the caches are kinda ephemeral in that case? This is not the first time I've run this sync, but it's a thing I do frequently.

Less Fat Luke
May 23, 2003

Exciting Lemon
rsync doesn't have any caching but it definitely sounds like there is something weird with your case as it's quite a common usage.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
Nah, I seem to have misremembered about the caching.
Easy mistake, since running rsync after the first pass is normally so comically fast that I can't believe it actually manages to check anything on disk that quickly.

Anyway, you probably have a strange timestamp issue. Either granularity or timezones or something. The first thing I would try is running rsync normally, and then see if that fixes things.
e: oh, exfat. That is probably your problem there somehow.

Klyith
Aug 3, 2007

GBS Pledge Week
It uses a chunk of CPU so I think it's doing a checksum on every file, regardless of timestamp? WTF.

man rsync posted:

--times, -t

This tells rsync to transfer modification times along with the files and update them on the remote system. Note that if this option is not used, the optimization that excludes files that have not been modified cannot be effective; in other words, a missing -t (or -a) will cause the next transfer to behave as if it used --ignore-times (-I), causing all files to be updated (though rsync’s delta-transfer algorithm will make the update fairly efficient if the files haven’t actually changed, you’re much better off using -t).

-a includes -t but for some reason it still does the checksum on everything.

Less Fat Luke posted:

rsync doesn't have any caching but it definitely sounds like there is something weird with your case as it's quite a common usage.

Yeah that's the main reason I'm posting, I figure I gotta be doing something wrong because this is what rsync does.

The exfat source is also using MS Bitlocker, maybe that's the problem?


Edit: setting --modify-window doesn't help, so I can only assume that it isn't seeing / believing the timestamps at all.

Klyith fucked around with this message at 01:17 on Dec 20, 2023

Computer viking
May 30, 2011
Now with less breakage.

Just for reference, exfat apparently has 10 ms timestamp resolution, up from the 2 seconds of fat/fat32.
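
If you want to see exactly what each side thinks the mtime is, comparing full-resolution timestamps on the same file in both trees is a quick sanity check (GNU stat; the paths are placeholders):
code:
# human-readable mtimes with sub-second precision, source vs destination
stat -c '%y  %n' /run/media/klyith/SDX/some/file /home/klyith/some/file

# or raw seconds since the epoch with the fractional part
stat -c '%.Y  %n' /run/media/klyith/SDX/some/file /home/klyith/some/file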

Less Fat Luke
May 23, 2003

Exciting Lemon
Can you paste the relevant paths from the output of `mount` for us?

Edit: Tested this locally on both MacOS and Linux w/ ZFS, this is the output from the Mac:
code:
luke@Lukes-MBA-2022  ~ #  rsync -vi -auh ~/Test/source ~/Test/dest
sending incremental file list
cd+++++++++ source/
>f+++++++++ source/CCCS-13847751_2022-01-01.pdf
>f+++++++++ source/CCCS-13847751_2022-02-01.pdf
>f+++++++++ source/CCCS-13847751_2022-03-01.pdf
>f+++++++++ source/CCCS-13847751_2022-04-01.pdf
>f+++++++++ source/CCCS-13847751_2022-05-01.pdf
>f+++++++++ source/CCCS-13847751_2022-06-01.pdf
>f+++++++++ source/CCCS-13847751_2022-07-01.pdf
>f+++++++++ source/CCCS-13847751_2022-08-01.pdf
>f+++++++++ source/CCCS-13847751_2022-09-01.pdf
>f+++++++++ source/CCCS-13847751_2022-10-01.pdf
>f+++++++++ source/CCCS-13847751_2022-11-01.pdf
>f+++++++++ source/CCCS-13847751_2022-12-01.pdf

sent 1.05M bytes  received 248 bytes  2.11M bytes/sec
total size is 1.05M  speedup is 1.00
luke@Lukes-MBA-2022  ~ #  rsync -vi -auh ~/Test/source ~/Test/dest
sending incremental file list

sent 342 bytes  received 17 bytes  718.00 bytes/sec
total size is 1.05M  speedup is 2,930.51
Definitely something is hosed with the timestamps. If I touch one file in source I get:
code:
sending incremental file list
>f..t...... source/CCCS-13847751_2022-11-01.pdf

Less Fat Luke fucked around with this message at 01:40 on Dec 20, 2023

cruft
Oct 25, 2007

Rsync has flags to turn off timestamp checking and just use file size. I think you can turn on computing a checksum which is slow AF. Maybe exfat triggers the checksum by default.
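
Concretely, the knobs I mean look like this (dry-run so nothing actually copies while testing; paths as in your earlier command):
code:
# ignore timestamps entirely and compare by size only
rsync --dry-run -vi -auh --size-only '/source/path' '/dest/path'

# allow timestamps to differ by up to 2 seconds (FAT-style resolution)
rsync --dry-run -vi -auh --modify-window=2 '/source/path' '/dest/path'

# the slow option: checksum every file that matches in size
rsync --dry-run -vi -auh --checksum '/source/path' '/dest/path'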

Less Fat Luke
May 23, 2003

Exciting Lemon

Klyith posted:

The exfat source is also using MS Bitlocker, maybe that's the problem?
Whoaah I missed this, what? I had no idea you could mount Bitlocker-protected filesystems. Whatever is doing that mounting is probably at fault.

Mr. Crow
May 22, 2008

Snap City mayor for life
rsync doesn't checksum unless you explicitly use the --checksum flag; it just checks file sizes and timestamps otherwise.

--checksum, -c
This changes the way rsync checks if the files have been changed and are in need of a transfer. Without this option, rsync uses a "quick check" that (by default) checks if each file's size and time of last modification match between the sender and receiver. This option changes this to compare a 128-bit checksum for each file that has a matching size. Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the transfer, so this can slow things down significantly (and this is prior to any reading that will be done to transfer changed files).

The sending side generates its checksums while it is doing the file-system scan that builds the list of the available files. The receiver generates its checksums when it is scanning for changed files, and will checksum any file that has the same size as the corresponding sender's file: files with either a changed size or a changed checksum are selected for transfer.

Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking a whole-file checksum that is generated as the file is transferred, but that automatic after-the-transfer verification has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check.

The checksum used is auto-negotiated between the client and the server, but can be overridden using either the --checksum-choice (--cc) option or an environment variable that is discussed in that option's section.

Less Fat Luke
May 23, 2003

Exciting Lemon
I believe they're doing rsync on two local filesystems where checksumming will not happen at all in the first place.

Klyith
Aug 3, 2007

GBS Pledge Week

Less Fat Luke posted:

Can you paste the relevant paths from the output of `mount` for us?

code:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=16372196k,nr_inodes=4093049,mode=755,inode64)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755,inode64)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme1n1p5 on / type ext4 (rw,noatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14365)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,nosuid,nodev,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,size=16392468k,nr_inodes=1048576,inode64)
/dev/nvme1n1p3 on /home type ext4 (rw,noatime)
/dev/nvme1n1p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
/dev/nvme1n1p2 on /home/data type btrfs (rw,noatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/)
/dev/nvme0n1p1 on /home/games type ext4 (rw,noatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=3278492k,nr_inodes=819623,mode=700,uid=1000,gid=1000,inode64)
portal on /run/user/1000/doc type fuse.portal (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
/dev/mapper/luks-fueris on /home/fueris type ext4 (rw,nosuid,nodev,noexec,noatime,user=luap)
/dev/mapper/bitlk-3cb2c533-8c46-45a3-b732-86cedeeda65f on /run/media/klyith/SDX type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
That last one is the source exfat/bitlocker and the destination is just my home, ext4.

Less Fat Luke posted:

Whoaah I missed this, what? I had no idea you could mount Bitlocker-protected filesystems.

Yeah it's very cool, provided by dm-crypt so afaik it's pretty solid. Integrates perfectly with KDE's standard mounting & unmounting system. Been using that for some media that is also used on my windows devices and it's much easier to use bitlocker on linux than anything else on Windows. MS was apparently pretty nice about Bitlocker, not open source but they published the full specs and methods.

If this is the downside then I'll live, or maybe this will get me off my butt to switch my craptop over to linux as well.
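
(For anyone else wanting to try it, the manual version of what udisks/KDE does under the hood is roughly this, via cryptsetup's BITLK support; the device name and mount point are examples.)
code:
# unlock the BitLocker container (needs cryptsetup 2.3 or newer)
sudo cryptsetup open --type bitlk /dev/sdb1 sdx-unlocked

# the inner exfat filesystem is now a plain block device
sudo mount /dev/mapper/sdx-unlocked /mnt/sdx

# when finished
sudo umount /mnt/sdx
sudo cryptsetup close sdx-unlocked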

Less Fat Luke posted:

I believe they're doing rsync on two local filesystems where checksumming will not happen at all in the first place.

Yes, it's all local. Does it not fall back to checksums "delta transfers" if timestamps are inadequate? Or is that only for remote transfers?

Klyith fucked around with this message at 02:26 on Dec 20, 2023

Less Fat Luke
May 23, 2003

Exciting Lemon
That's interesting, TIL. And yeah as far as I know there's no delta transfers for local rsync, it should always do the full file.

I wonder if it could be something as simple as the Windows partition having the timestamps stored in the wrong timezone or something (related to Linux having the hardware clock in UTC versus Windows) but I have no idea at this point, sorry. Plus you said you've manually checked the timestamps in Linux so that should rule that out.

ExcessBLarg!
Sep 1, 2001

Klyith posted:

This is not the first time I've run this sync, but it's a thing I do frequently.
Was the last time you ran this sync before the most recent DST transition?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I feel like I had a similar problem with rsync in the last while, but I don’t recall solving it, hmm

mawarannahr
May 21, 2019

cruft posted:

I spent a couple years using acme exclusively. Here's a review!

Cool stuff:

  • The basic premise rules the school. I already know how to mess with text files in the shell, and it was refreshing not to have to learn yet another Domain Specific Language to do the same thing in my text editor.
  • "What if you could just highlight text and easily pipe it to whatever executable you want", while part of this, is cool enough it's worth mentioning in its own bullet point.
  • Being able to manipulate the text editor through files is also cool. It's probably cooler in plan9 where that filesystem is magically mounted, but using the 9p command wasn't too bad.
  • I actually like using a proportional font in my text editor now. Most of the time.

Not so cool stuff:

  • The fonts are not great. Unicode support is not great. You can work around this with some settings.
  • The font rendering isn't great either, even after you've changed to a different font. Glyph borrowing doesn't exist. It's just sort of rudimentary fonts.
  • You absolutely positively must have a mouse with 3 buttons, and it's very important to be able to precisely position the mouse pointer. I gave up and bought the ergo mouse the plan9 people like, and it was very nice. But using acme with a trackpad was lame, and using any other mouse was almost out of the question.
  • Make the up and down arrow work like every other damned thing ever made, for crying out loud.

Super annoying stuff:

  • After moving everything to Wayland, it was the only legacy X11 app I was using. This looked pretty bad on my HiDPI monitor (yes, I tried that. I tried that too. Yes, that too. Thank you for your suggestions.)
  • I'm glad Rob Pike can live in a world where \x09 is the only character needed for indenting code. I wish I could live there, too.
  • LSP is a really nice thing to have while writing code.
  • Even frickin' Go is now better in VSCode: I just start typing "fmt.Println", save the file, and it automagically imports the fmt module at the top. And runs "go fmt". And tells me about syntax errors.
  • Atom's ^D multiple cursor thing was a game changer for text editing. I don't think I could live without it at this point.

If somebody made an editor like Acme for Wayland and/or Windows, where I could just use sed and awk and bourne shell to do things, I might give it serious consideration. But at this point I don't think I can go back to Acme except for nostalgia's sake.

Thank you, this post is really cool :) I like plan9 stuff a lot -- I learned quite a bit about C from reading the source code for stuff like bio, and used mk many years before I had to learn GNU make. I like that this stuff is still out there.

I'm surprised Unicode support isn't great, given the origins of utf8.

Re better integration with the shell, multiple cursors, LSP, etc. -- have you tried kakoune? I don't know about its mouse story. It's really cool and comes with Clippy, which is very helpful when learning to use it.

I didn't actually know how to use vim until I started using kak. Then I had a need to learn vim in places I couldn't run kak. I use vim now just 'cause it's already on the server, and haven't touched kak since, but it's a lot more elegant and easier to learn than vim.

e: shoutout to elegant unpopular stuff

mawarannahr fucked around with this message at 04:42 on Dec 21, 2023

mawarannahr
May 21, 2019

NihilCredo posted:

Take a look at rclone. It's very similar to rsync, but has some features that you may find useful (filtering by date is built-in, for example, and it backs up symlinks as text files). And when down the line the backup destination inevitably becomes some S3 cold storage bucket, you'll only have minimal changes to apply.

For the 'ghost files', it shouldn't be too complicated to solve that with a script. Set rclone or whatever other tool you choose to log all its actions in a structured format, find every instance of a file being deleted from the source, then create an empty filename.BACKEDUP placeholder. Exclude all 0-sized .BACKEDUP files from the main job.

Just wanted to say thanks, I did set up rclone and have had success with its SMB backend (very convenient that it can do so in userspace + seems to work better than fuse SMB). My plan is to use it as a Restic backend.
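
In case it helps anyone else, the setup is roughly this (the remote name, host, and paths are made up; the SMB password is stored obscured by rclone config):
code:
# ~/.config/rclone/rclone.conf, as written by `rclone config`
[nas]
type = smb
host = 192.168.1.50
user = backup
pass = <obscured>

# restic can then use the rclone remote as its backend
restic -r rclone:nas:share/restic-repo init
restic -r rclone:nas:share/restic-repo backup ~/documents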

Framboise
Sep 21, 2014

To make yourself feel better, you make it so you'll never give in to your forevers and live for always.


Lipstick Apathy
Dunno if this is the right thread for this since my question is probably really base level compared to most, but:

I've been considering switching over to Linux-- mostly because I'm in the mood for something different, but also because while I was fine with Windows 10, as it approaches the end of its lifespan, I'm not really interested in 11 from what I've seen so far. I've been dabbling with different distros on VMs for a while and the one I keep coming back to is Arch as I like the feeling of being able to completely personalize my experience. That's been all well and good-- I'm still learning and I've been enjoying the experience of learning, but I've been having a lot of trouble with anything that's Wayland-based. KDE with Wayland mostly works, but it feels really clunky compared to the X11 version. I also want to learn how to use a tiling window manager like Hyprland or Sway or i3, but I can't get the former two to work properly at all in a VM, and I'm concerned that they're going to be a pain when I install for real since I've got a Nvidia GPU. It'd be a bummer if I couldn't use the former two because I've seen some really nice clean setups with fluid animations and I'm a sucker for that kind of thing.

Am I just better off staying away from Wayland until I get around to upgrading my computer so I can get an AMD GPU, or is it just the VM being weird and I'll be fine once I get the Nvidia drivers installed?

Also, I'm planning on dual-booting with Windows still just in case there's a game I want to play or program I need to use that's having compatibility issues with Linux-- I'd be able to access files from either OS so long as I just push saved images/documents/videos/steam installs to the same directory on a separate partition, right?

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
Mounting windows partitions in linux works fairly well these days. And WSL should be able to mount a linux partition, though I never actually tried it. Having a third drive/partition does work, the same as using a removable drive.
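
(For the record, going by the wsl --mount docs it looks something like this from an elevated Windows prompt; again, untested by me, and the drive/partition numbers are made up.)
code:
# attach a physical disk to WSL2 and mount an ext4 partition from it
wsl --mount \\.\PHYSICALDRIVE2 --partition 1 --type ext4

# it appears under /mnt/wsl inside the distro; detach when done
wsl --unmount \\.\PHYSICALDRIVE2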

Wayland is in a strange state where some things work much better than X11 and others work much less well. For KDE specifically, the next version in January is supposed to add lots of Wayland features.

EndeavourOS is arch with a sane installer, and you might try it. They also have usable defaults and a tutorial for one of the tiling wms. Though it didn't click for me and I went back to kde. This was on my laptop, which has an nvidia gpu.

cruft
Oct 25, 2007

I'm going to suggest you physically disconnect that Windows disk so you can feel free to drink around with distributions and reinstall a lot, without having to worry about accidentally nuking the OS you're familiar with.

No I've never done that why would you assume such a thing

cruft
Oct 25, 2007

mawarannahr posted:

I'm surprised Unicode support isn't great, given the origins of utf8.

Unicode support is great, but the way the font engine does things means you need one font with every glyph, which is not how any modern software wants the system to work.

Instead of the tofu block for unknown glyphs, you get a tiny Russ Cox head. :classiclol:

mystes
May 31, 2006

Framboise posted:

Dunno if this is the right thread for this since my question is probably really base level compared to most, but:

I've been considering switching over to Linux-- mostly because I'm in the mood for something different, but also because while I was fine with Windows 10, as it approaches the end of its lifespan, I'm not really interested in 11 from what I've seen so far. I've been dabbling with different distros on VMs for a while and the one I keep coming back to is Arch as I like the feeling of being able to completely personalize my experience. That's been all well and good-- I'm still learning and I've been enjoying the experience of learning, but I've been having a lot of trouble with anything that's Wayland-based. KDE with Wayland mostly works, but it feels really clunky compared to the X11 version. I also want to learn how to use a tiling window manager like Hyprland or Sway or i3, but I can't get the former two to work properly at all in a VM, and I'm concerned that they're going to be a pain when I install for real since I've got a Nvidia GPU. It'd be a bummer if I couldn't use the former two because I've seen some really nice clean setups with fluid animations and I'm a sucker for that kind of thing.

Am I just better off staying away from Wayland until I get around to upgrading my computer so I can get an AMD GPU, or is it just the VM being weird and I'll be fine once I get the Nvidia drivers installed?

Also, I'm planning on dual-booting with Windows still just in case there's a game I want to play or program I need to use that's having compatibility issues with Linux-- I'd be able to access files from either OS so long as I just push saved images/documents/videos/steam installs to the same directory on a separate partition, right?
The proprietary nvidia drivers suck with wayland and sway won't work with them at all. I'm not sure what the state of the open source drivers is. I would suggest sticking with x11 and i3 for now unless you want to switch to an amd gpu

Is your computer a desktop and does it have integrated graphics as well? it's kind of a pita so I'm not sure I should suggest it but you could do gpu passthrough and use the integrated graphics for linux

mystes fucked around with this message at 16:03 on Dec 23, 2023

Mr. Crow
May 22, 2008

Snap City mayor for life

Framboise posted:

Dunno if this is the right thread for this since my question is probably really base level compared to most, but:

I've been considering switching over to Linux-- mostly because I'm in the mood for something different, but also because while I was fine with Windows 10, as it approaches the end of its lifespan, I'm not really interested in 11 from what I've seen so far. I've been dabbling with different distros on VMs for a while and the one I keep coming back to is Arch as I like the feeling of being able to completely personalize my experience. That's been all well and good-- I'm still learning and I've been enjoying the experience of learning, but I've been having a lot of trouble with anything that's Wayland-based. KDE with Wayland mostly works, but it feels really clunky compared to the X11 version. I also want to learn how to use a tiling window manager like Hyprland or Sway or i3, but I can't get the former two to work properly at all in a VM, and I'm concerned that they're going to be a pain when I install for real since I've got a Nvidia GPU. It'd be a bummer if I couldn't use the former two because I've seen some really nice clean setups with fluid animations and I'm a sucker for that kind of thing.

Am I just better off staying away from Wayland until I get around to upgrading my computer so I can get an AMD GPU, or is it just the VM being weird and I'll be fine once I get the Nvidia drivers installed?

Also, I'm planning on dual-booting with Windows still just in case there's a game I want to play or program I need to use that's having compatibility issues with Linux-- I'd be able to access files from either OS so long as I just push saved images/documents/videos/steam installs to the same directory on a separate partition, right?

Wayland and nvidia aren't there yet by all reports, and it sounds like you're just confirming it. Maybe try it with the next big KDE release, but I'm guessing you'll need to wait on the open source nvidia driver (NVK, I think?) or some update by nvidia themselves.

You can share Windows partitions on Linux pretty easily ya, but be aware it may not be as seamless as you hope, due to things like different line endings. There's utils like dos2unix/unix2dos to easily and quickly convert text files, but if you plan on doing it back and forth repeatedly it's not gonna be a streamlined experience.

ziasquinn
Jan 1, 2006

Fallen Rib
yeah if you have Nvidia just use the proprietary drivers tbh and stick to X11 for now at least.

Computer viking
May 30, 2011
Now with less breakage.

Oh I love leaving the house for two days. I woke up (at my parents' house) to two sets of messages from my partner - who is a reasonably competent IT dude himself. The oldest one was "weird, my PC doesn't turn on. Oh well I'll look at it later."

The second one is "several of my partitions are just gone and heaps of files are randomly truncated; also it keeps greyscreening." For fun the damage is across both sata and nvme drives, and I think his Windows install is also dead. I'm quite sure his PC is cursed, because it has the weirdest issues and they didn't go away after a MB/RAM/PSU/GPU replacement last year. I'm beginning to wonder if the steel in his case was recycled from a war relic or something.

(And no this is not a question, sorry. Just wanted to complain.)

Computer viking fucked around with this message at 11:01 on Dec 24, 2023

feedmegin
Jul 30, 2008

Mr. Crow posted:

Wayland and nvidia arent there yet by all reports and sounds like your just confirming it. Maybe try it with the next big KDE release but I'm guessing you'll need to wait on the open source nvidia driver (NVK i think?) or some update by nvidia themselves.

You can share Windows partitions on Linux pretty easily ya, but be aware it may not be as seamless as you hope, due to things like different line endings. Theres utils like dos2unix/unix2dos to easily and quickly change text files but if you plan on doing it back and forth repeatedly its not gonna be a streamlined experience.

Windows cannot see Linux partitions (e.g. ext4) natively, so be aware of that. You'd want to have a Windows (NTFS) partition you mount from Linux, not the other way around.
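
A minimal sketch of what that shared NTFS data partition can look like in /etc/fstab, using the in-kernel ntfs3 driver (the device, mount point, and uid/gid are placeholders; ntfs-3g works too on kernels older than 5.15):
code:
# /etc/fstab entry for the shared data partition
/dev/nvme0n1p4  /mnt/shared  ntfs3  defaults,uid=1000,gid=1000,noatime  0 0

# or mount it once by hand to test
sudo mkdir -p /mnt/shared
sudo mount -t ntfs3 -o uid=1000,gid=1000 /dev/nvme0n1p4 /mnt/shared
One gotcha: Windows Fast Startup/hibernation leaves NTFS marked dirty, and Linux will refuse to mount it read-write until Windows has been shut down fully.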

The CR/LF thing with plain text files is a thing but generally if Windows sees a Unix-style (LF only) text file you created on the Linux side it won't gently caress around with it these days. Binary files such as jpegs or whatever shouldn't be an issue.

Also if you haven't tried WSL2 on Windows, give it a go. It's basically real Linux running in a tightly integrated Windows VM and ngl it's pretty good these days.


Klyith
Aug 3, 2007

GBS Pledge Week
I don't think I've seen a program gently caress up CR/LF in over a decade.

One app I use on linux fails on text files encoded in Windows-1252 if they have particular non-ascii characters. I had a lot of recipes saved with ° or ½ characters that it barfed on. But that is ReText, not a standard widely-used text editor like Kate or Gnome Text. Normal apps, no problem.
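
For what it's worth, converting those old files is a one-liner with iconv (the filename is an example):
code:
# re-encode a Windows-1252 recipe as UTF-8 so picky editors stop choking on ° and ½
iconv -f WINDOWS-1252 -t UTF-8 recipe.txt > recipe-utf8.txt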
