Realtek giving people problems? Why I never!
|
|
# ? Sep 23, 2022 08:48 |
|
Klyith posted:Hmmm, this may mean that you have both pipewire and pulseaudio installed at the same time (bad). Or just that pipewire's pulse compatibility was loaded when you ran inxi, because you were playing audio from something that wanted pulse at the time (fine and normal). Just as an alternative, if you are connected to your monitor using HDMI or DP, and your monitor has a 3.5mm out, then you could set the default device to be the Nvidia card, and use that for audio. I've done that for years instead of mucking about with the motherboard audio.
|
# ? Sep 23, 2022 09:53 |
|
Internet Explorer posted:Is this due to having a VPN client on your work machine? I'm fairly clueless when it comes to this stuff, but that's what I noticed. VPN connection enabled, no network. VPN connection disabled, network works fine. Most likely due to the VPN, but it's pretty weird that the networking works flawlessly on WSLv1 but not on v2. There's also a wsl-vpnkit, but I'm not going to run random software from GitHub on my company-issued machine. The workaround is spinning up an EC2 instance and using that as a dev machine. I can run that on demand 24/7 for a few years and it'll still be cheaper than trying to get WSL v2 working.
|
# ? Sep 23, 2022 10:40 |
|
Klyith posted:Hooooo kay, the actual audio component on the mobo is a Realtek ALC4080, which is brand new and giving people horrible problems everywhere. (even on windows!) Welp, that explains it. I have been wracking my brain trying various dubious things I found while googling issues others have had. Thank you for this, I really appreciate it. Guess I am going to have to check in on it now and then to see if it has been fixed. In regard to the other question, I can do that later on; this is a fresh install of Manjaro so I haven't really done any fuckery yet, so if those things are running at the same time I dunno why.
|
# ? Sep 23, 2022 11:02 |
|
I do some volunteer work that involves docx and pptx files. At the end of the week, I'd like to purge all of these types of files from a directory using crontab. My commands

0 14 * * 0 rm *.docx ~/Documents/foo
1 14 * * 0 rm *.pptx ~/Documents/foo/bar

didn't work, and when I typed the same command in my terminal it complained that there were no .docx files and that Documents/foo is a directory. Is there another set of commands that would work without deleting the entire directory?
|
# ? Sep 25, 2022 19:15 |
|
Did you do ‘crontab -e’ or ‘sudo crontab -e’ when making the jobs?
|
# ? Sep 25, 2022 19:36 |
|
Warbird posted:Did you do ‘crontab -e’ or ‘sudo crontab -e’ when making the jobs? Yes. On my system it takes me straight to the nano file for editing.
|
# ? Sep 25, 2022 19:37 |
|
Which one? Sudo or non sudo? What is the crontab command you’re using?
|
# ? Sep 25, 2022 19:43 |
|
F_Shit_Fitzgerald posted:I do some volunteer work that involves docx and pptx files. At the end of the week, I'd like to purge all of these type of files from a directory using crontab. My commands That's not how the rm command works. Not even in Windows (had to look to make sure) does the del command work like that. What you probably want is rm ~/Documents/foo/*.docx as a command line. For the crontab, I'd advise making a shell script:
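The code block didn't survive the quote, but a minimal sketch of such a script might look like this (the directory layout follows the post; the function name and the fallback path are illustrative):

```shell
#!/bin/sh
# Hypothetical weekly cleanup: delete the week's .docx and .pptx files.
clean_docs() {
    base="$1"
    rm -f "$base"/*.docx        # -f: no error if nothing matches
    rm -f "$base"/bar/*.pptx
}

# Default to the directory from the post; accept an override for testing.
clean_docs "${1:-$HOME/Documents/foo}"
```

Saved as, say, ~/bin/weekly-clean.sh and marked executable with chmod +x, the whole crontab entry collapses to one line: 0 14 * * 0 $HOME/bin/weekly-clean.sh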
|
# ? Sep 25, 2022 19:45 |
It's the difference between per-user crontab and system crontab, which in turn controls which user the commands are run as (although the system crontab also lets you specify user:group).
|
|
# ? Sep 25, 2022 19:45 |
|
Edit. Sorry, wasn't caught up
|
# ? Sep 25, 2022 19:45 |
|
Seconding Volguus' idea, though if you really, really don't want to make it a separate script then you can wrap your commands with bash like this: bash -c 'rm /home/your name/foo/*.docx; rm /home/your name/foo/bar/*.pptx'
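Spelled out as a full crontab entry it would look like the line below (paths are placeholders; note that a home directory containing a space has to be double-quoted inside the single quotes, or the glob breaks apart):

```
0 14 * * 0 bash -c 'rm -f "/home/your name/foo"/*.docx "/home/your name/foo/bar"/*.pptx'
```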
|
# ? Sep 25, 2022 19:49 |
|
Warbird posted:Which one? Sudo or non sudo? What is the crontab command you’re using? crontab -e Next time I'll use sudo crontab -e. Volguus posted:That's not how the rm command works. Not even in windows (had to look to make sure), the del command doesn't work like that. What you probably want is rm ~/Documents/foo/*.docx as a command line. As a crontab, I'd advise making a shell script: Oh. See, I thought it had to be structured that way for cron to be able to interpret it. A shell script would be ideal because I need to learn bash anyway. Thanks for all the replies. There's so much I have yet to learn in Linux... F_Shit_Fitzgerald fucked around with this message at 20:06 on Sep 25, 2022 |
# ? Sep 25, 2022 20:04 |
|
Do you really want to delete those files on a schedule? What about moving them all to a folder named after the current date which then allows you to delete that folder manually or write another script to delete it a week later? I'm just thinking of an edge case where your work takes longer than expected and runs over into the following week.
|
# ? Sep 25, 2022 20:29 |
|
I probably would use logrotate or another existing script to handle that sort of thing.
|
# ? Sep 25, 2022 20:34 |
|
F_Shit_Fitzgerald posted:crontab -e Just to elaborate, when you're using 'sudo', you are kind of changing to the 'root' user and then running the command. This is only necessary if you need to do things that your own user is not permitted to do. When you run 'sudo crontab -e', you are then editing the crontab of the 'root' user. If you only want to change or alter your own files, you should probably not use sudo -- you already have all permissions on your own files. When you run 'crontab -e', you are editing your own crontab, and commands run from it will run with your own permissions. For deleting your own files, this is almost certainly what you want.
|
# ? Sep 25, 2022 20:54 |
In most cases, it's a better idea to fix permissions so that you don't need to run a privileged command in order to do something. Very very few things need to run as root.
|
|
# ? Sep 25, 2022 21:30 |
|
Accidentally running an rm -rf while located in root, for instance. Something which I tried in a VM recently and was pleasantly surprised to see that if you try that these days it will error out and ask you to provide a flag like --no-really-this-is-not-an-accident before it will do it.
|
# ? Sep 25, 2022 21:39 |
|
Thanks Ants posted:Do you really want to delete those files on a schedule? What about moving them all to a folder named after the current date which then allows you to delete that folder manually or write another script to delete it a week later? The nature of this work is that the docx and pptx files change week by week. Is there another package that could automate this other than cron? kujeger posted:Just to elaborate, when you're using 'sudo', you are kind of changing to the 'root' user and then running the command. This is only necessary if you need to do things that your own user is not permitted to do. Ah, OK. I was dumb and didn't realize that different crontabs exist for the user and for the root.
|
# ? Sep 25, 2022 22:20 |
|
F_Shit_Fitzgerald posted:The nature of this work is that the docx and pptx files change week by week. Is there another package that could automate this other than cron? This seems like something that could be cron-ed. Basically

tar -cvf /home/fitzgerald/Documents/backups/$(date +%F).tar /home/fitzgerald/Documents/foo && rm -fr /home/fitzgerald/Documents/foo/*

in a .sh file, with edits based on whatever else needs to be dealt with, and compression flags for bzip2/gzip/xz (changing the tar extension as necessary).
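As a sketch of that idea in script form: the paths mirror the post, and -z/.tar.gz stands in for whichever compressor you actually pick. The && matters; the directory is only cleared if the tar step succeeded.

```shell
#!/bin/sh
# Archive the work directory under today's date, then clear it out.
backup_and_clear() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    # -C "$src" . archives the directory's contents with relative paths
    tar -czf "$dest/$(date +%F).tar.gz" -C "$src" . && rm -rf "$src"/*
}

# Example invocation with the poster's layout:
# backup_and_clear "$HOME/Documents/foo" "$HOME/Documents/backups"
```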
|
# ? Sep 25, 2022 22:29 |
|
F_Shit_Fitzgerald posted:Ah, OK. I was dumb and didn't realize that different crontabs exist for the user and for the root. Sorry, yeah, that's what I was getting at. Since you were using the tilde home shortcut, the Occam's razor guess was that you were running as a different user than expected by way of the root cron. I'm still curious why it worked in your CLI, as the command shouldn't work when run the way you posted it. This said, I just assumed there was some sort of [pattern] [target] mode for rm that I wasn't privy to, because Linux.
|
# ? Sep 26, 2022 03:45 |
|
Warbird posted:Sorry, yeah that’s what I was getting at. Since you were using the tilde home shortcut occam’s was that you were running as a different user than expected by way of the root cron. I’m still curious why it worked in your CLI interface as the command shouldn’t work when run in the way you posted it. This said I just assumed there was some sort of [pattern] [target] mode for rm that I wasn’t privy to because Linux. rm accepts multiple files rm *.docx ~/Documents/foo would delete all docx in the current directory, and then attempt to delete foo which it should complain is a directory.
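A quick demonstration of that behaviour in a throwaway directory (everything here lives under a temp dir):

```shell
# rm takes any number of paths; directories only go away with -r.
tmp=$(mktemp -d)
touch "$tmp/a.docx" "$tmp/b.docx"
mkdir "$tmp/foo"
cd "$tmp"
rm *.docx foo 2>"$tmp/err" || true  # the glob matches are removed...
cat "$tmp/err"                      # ...but foo fails with "Is a directory"
```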
|
# ? Sep 26, 2022 04:02 |
|
Well there ya go
|
# ? Sep 26, 2022 04:34 |
|
Work stuff: If you had a Linux kernel developer that was having to actually work on the kernel itself for your company, do you expect them to also be working on drivers and other applications? I would think having to mingle in the latest kernel code and balancing your own changes with it would be enough of a job for one person. But maybe that's not normal and these people are just kind of hitting the kernel as a side gig of their "linux kernel developer" job?
|
# ? Sep 26, 2022 06:56 |
|
Rocko Bonaparte posted:Work stuff: If you had a Linux kernel developer that was having to actually work on the kernel itself for your company, do you expect them to also be working on drivers and other applications? I would think having to mingle in the latest kernel code and balancing your own changes with it would be enough of a job for one person. But maybe that's not normal and these people are just kind of hitting the kernel as a side gig of their "linux kernel developer" job? There should be a PM, there should be a clear list of tasks that the person has to do and an estimate of those tasks by that person. Monday, I'm working on the kernel fixing bug 123. Tuesday, on application X, adding feature 456. etc. That also means that Tuesday there's no work being done on the kernel, and if bug 123 takes longer than 1 day then it just doesn't get done, until probably Wednesday. At the end of the day whoever signs the cheques specifies what's important to them and what are their priorities, right?
|
# ? Sep 26, 2022 12:38 |
|
Rocko Bonaparte posted:Work stuff: If you had a Linux kernel developer that was having to actually work on the kernel itself for your company, do you expect them to also be working on drivers and other applications? What are the "drivers" if not in-tree? As far as applications go, ancillary C user-space utilities that interface with the drivers absolutely make sense. Think anything that would belong in util-linux in a general-purpose system. But if "applications" are like Java GUI poo poo I wouldn't expect a kernel person necessarily to do that. Rocko Bonaparte posted:I would think having to mingle in the latest kernel code and balancing your own changes with it would be enough of a job for one person. But maybe that's not normal and these people are just kind of hitting the kernel as a side gig of their "linux kernel developer" job? Generally speaking, there's going to be many more folks of the former description for which actual kernel work is only 10-20% of their job responsibility.
|
# ? Sep 26, 2022 15:11 |
ExcessBLarg! posted:What are the "drivers" if not in-tree?
|
|
# ? Sep 26, 2022 15:18 |
|
There are also userspace drivers made with uio_pci_generic or vfio-pci. Those are minimal drivers that, when assigned as the driver of a device, allow userspace code to directly access the device. But you don't need to be a kernel specialist to write those so I don't know how much that would apply.
|
# ? Sep 26, 2022 17:26 |
|
Is there some way to make Ubuntu on Hyper-V faster? I have a 12700K and RTX 3080, and Windows 11. Ubuntu is 22.04.1 LTS and the kernel version is linux 5.15.0.1020-azure. The display runs at 3840x2160 at 50Hz. I have no idea why it's only 50Hz or how to increase it. My monitor supports up to 120Hz, but mostly it would be nice if the interface wasn't so slow. Like if I max a Firefox window it takes maybe a second or two while it slowly renders the window again. It ran fine on Win10 + 2560x1440. I upgraded the OS and monitor and the lag began.
|
# ? Sep 28, 2022 09:39 |
|
Well I've discovered something lovely. I tried to perform a back-up of some data and found that the output is completely bollocksed. Turns out btrfs (don't ask me I didn't pick it) reuses inums across subvolumes, so the archive assumes that same inum = hardlink, don't bother saving the file again. Any ideas how to get past this?
|
# ? Sep 28, 2022 13:52 |
|
The obvious thing to do sounds like making one backup archive per subvolume.
|
# ? Sep 28, 2022 14:04 |
|
That is the hard way of doing it. Problem is some subvolumes are terabytes some are gigabytes, and it's a waste to put small ones on a whole tape, hence the combination. I'm developing a galaxy brain idea of something like tar -cvf /dev/st1 <(tar cvf /subvolume1) <(tar cvf /subvolume2) but I'm certain this will either laugh at me or create the world's worst mistake.
|
# ? Sep 28, 2022 14:07 |
|
How about using btrfs's integrated stream function? Or I suppose, if you are using tar, shouldn't it have lots of options to manipulate hardlink detection to avoid exactly such problems.
|
# ? Sep 28, 2022 14:16 |
|
You know what, you might be right - I might be overthinking this... I'll give it a go and see how it fares.
|
# ? Sep 28, 2022 14:22 |
Yeah, taking the standard I/O stream and optionally compressing that is much better than trying to work at the filesystem level. On the assumption that something equivalent to zfs corrective receive is coming to btrfs, it also gives you that option: it can repair data that has failed, provided the device still works and has only had minor errors, even without any mirroring or striped-with-distributed-parity data to heal itself automatically.
|
|
# ? Sep 28, 2022 16:03 |
|
Ihmemies posted:Is there some way to make Ubuntu on Hyper V faster? I have a 12700K and RTX 3080, and Windows 11. Ubuntu is 22.04.1 LTS and kernel version is linux 5.15.0.1020-azure. Probably need to up the amount of vram, though a quick google search suggests that may not be possible in Hyper-V? It's been a while since I've had to mess with Hyper-V, but that's usually the issue in KVM as desktop resolution gets higher. I thought Hyper-V had some vGPU settings, but idk
|
# ? Sep 28, 2022 16:07 |
|
Tesseraction posted:Turns out btrfs (don't ask me I didn't pick it) reuses inums across subvolumes, so the archive assumes that same inum = hardlink, don't bother saving the file again.
|
# ? Sep 28, 2022 16:10 |
|
ExcessBLarg! posted:Yikes, I make this assumption pretty frequently. I don't use btrfs though. Yeah, I looked it up and there is discussion about fixing it in the Linux kernel, it's apparently a bigger issue over NFS. Even now their method to fix it apparently still allows collisions they're just "less likely." Luckily a grep on previous archives makes it easy to tell if it mistook files for hardlinks...
|
# ? Sep 28, 2022 16:13 |
|
Ihmemies posted:Is there some way to make Ubuntu on Hyper V faster? I have a 12700K and RTX 3080, and Windows 11. Ubuntu is 22.04.1 LTS and kernel version is linux 5.15.0.1020-azure. I don't have any Hyper-V experience, but I know there is a hyperv_fb module that guests can use for display if the host presents the right virtual hardware. I would assume modern Ubuntu provides the module, so check on the hypervisor to see which virtual graphics device is being provided to this guest. And on the guest side you can look in lsmod and dmesg to see if it is loaded.
|
# ? Sep 28, 2022 16:33 |
|
Tesseraction posted:Well I've discovered something lovely. I tried to perform a back-up of some data and found that the output is completely bollocksed. Turns out btrfs (don't ask me I didn't pick it) reuses inums across subvolumes, so the archive assumes that same inum = hardlink, don't bother saving the file again. What backup tool? It might have an option either for btrfs support, or for something like "don't skip hardlinks". Edit: hadn't refreshed. If tar is how you do it, you could try --hard-dereference. Man says "Follow hard links; archive and dump the files they refer to". Not super clear what this means or what it does without it, but it might do something. Phosphine fucked around with this message at 18:58 on Sep 28, 2022 |
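What the flag does, roughly: by default GNU tar stores the second and later names of a hardlinked file as link entries with no data, keyed on inode number; --hard-dereference stores each name as an independent full copy, which would sidestep the cross-subvolume inode confusion at the cost of space. A small demonstration on temporary files (GNU tar assumed):

```shell
tmp=$(mktemp -d)
echo hello > "$tmp/a"
ln "$tmp/a" "$tmp/b"                          # b is a hardlink to a
tar -C "$tmp" -cf "$tmp/plain.tar" a b        # default: b stored as a link entry
tar -C "$tmp" --hard-dereference -cf "$tmp/deref.tar" a b
tar -tvf "$tmp/plain.tar"                     # listing shows "b link to a"
tar -tvf "$tmp/deref.tar"                     # listing shows two ordinary files
```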
# ? Sep 28, 2022 18:54 |