BlankSystemDaemon
Mar 13, 2009



Realtek giving people problems? Why I never!


CaptainSarcastic
Jul 6, 2013



Klyith posted:

Hmmm, this may mean that you have both pipewire and pulseaudio installed at the same time (bad). Or just that pipewire's pulse compatibility was loaded when you ran inxi, because you were playing audio from something that wanted pulse at the time (fine and normal).

Can you do pacman -Qs pulseaudio and paste that?


OTOH that is gonna be a minor problem compared to:

Hooooo kay, the actual audio component on the mobo is a Realtek ALC4080, which is brand new and giving people horrible problems everywhere. (even on windows!)

It's on the USB bus rather than PCIe, thus why it's labeled "Giga-Byte USB Audio". But it's not a standard UAC1/2 device, and still under active development to support each mobo because I guess each one can be slightly different? So you may be kinda hosed for a little while until support improves.


OTOH your Logitech 5.1 speakers are connected via analog so the rear panel audio apparently works fine. And you said in the other thread that the speakers have a headphone output that you can plug your headset into. Have you tried the rear panel mic input with the headset? If that works then the only major problem is the front panel -- due to no specific support for that mobo yet.

So what I'd do in that situation is get an extension audio cable so that you can plug the headset into the speakers and the back panel mic input at the same time.




Realtek: you touch the Crab, you get the Pinch

Just as an alternative, if you are connected to your monitor using HDMI or DP, and your monitor has a 3.5mm out, then you could set the default device to be the Nvidia card, and use that for audio. I've done that for years instead of mucking about with the motherboard audio.

LochNessMonster
Feb 3, 2005

I need about three fitty


Internet Explorer posted:

Is this due to having a VPN client on your work machine? I'm fairly clueless when it comes to this stuff, but that's what I noticed. VPN connection enabled, no network. VPN connection disabled, network works fine.

Most likely due to the VPN, but it's pretty weird that networking works flawlessly on WSLv1 but not on v2. There's also a wsl-vpnkit, but I'm not going to run random software from GitHub on my company-issued machine. The workaround is spinning up an EC2 instance and using that as a dev machine. I can run that on demand 24/7 for a few years and it'll still be cheaper than trying to get WSL v2 working.

Blue Waffles
Mar 18, 2008

セイバー

Klyith posted:

Hooooo kay, the actual audio component on the mobo is a Realtek ALC4080, which is brand new and giving people horrible problems everywhere. (even on windows!)

It's on the USB bus rather than PCIe, thus why it's labeled "Giga-Byte USB Audio". But it's not a standard UAC1/2 device, and still under active development to support each mobo because I guess each one can be slightly different? So you may be kinda hosed for a little while until support improves.

Realtek: you touch the Crab, you get the Pinch

Welp that explains it, I have been wracking my brain trying various dubious things I found while googling in relation to issues others have had. Thank you for this, I really appreciate it. Guess I am going to have to check in on it now and then to see if it has been fixed.

In regard to the other question, I can do that later on. This is a fresh install of Manjaro so I haven't really done any fuckery yet, so if those things are running at the same time I dunno why.

F_Shit_Fitzgerald
Feb 2, 2017



I do some volunteer work that involves docx and pptx files. At the end of the week, I'd like to purge all files of those types from a directory using crontab. My commands
0 14 * * 0 rm *.docx ~/Documents/foo
1 14 * * 0 rm *.pptx ~/Documents/foo/bar

didn't work, and when I typed the same command in my terminal it complained that there were no .docx files and that Documents/foo is a directory.

Is there another set of commands that would work without deleting the entire directory?

Warbird
May 23, 2012

America's Favorite Dumbass

Did you do ‘crontab -e’ or ‘sudo crontab -e’ when making the jobs?

F_Shit_Fitzgerald
Feb 2, 2017



Warbird posted:

Did you do ‘crontab -e’ or ‘sudo crontab -e’ when making the jobs?

Yes. On my system it takes me straight to the nano file for editing.

Warbird
May 23, 2012

America's Favorite Dumbass

Which one? Sudo or non sudo? What is the crontab command you’re using?

Volguus
Mar 3, 2009

F_Shit_Fitzgerald posted:

I do some volunteer work that involves docx and pptx files. At the end of the week, I'd like to purge all files of those types from a directory using crontab. My commands
0 14 * * 0 rm *.docx ~/Documents/foo
1 14 * * 0 rm *.pptx ~/Documents/foo/bar

didn't work, and when I typed the same command in my terminal it complained that there were no .docx files and that Documents/foo is a directory.

Is there another set of commands that would work without deleting the entire directory?

That's not how the rm command works. Not even in Windows (I had to look to make sure) does the del command work like that. What you probably want is rm ~/Documents/foo/*.docx as a command line. For the crontab, I'd advise making a shell script:

code:
#!/bin/bash

rm -f /home/<user>/Documents/foo/*.docx
rm -f /home/<user>/Documents/foo/bar/*.pptx
That is: do not use ~, because it depends on which user cron is running the job as; specify the full path. "*.docx" is expanded by the shell, not by cron, so give it a shell. Add "-f" to force the removal, so "rm" doesn't stop to ask if you're really sure.
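To tie that together, the script then gets hooked into cron with a single line. A sketch, assuming the script was saved as /home/&lt;user&gt;/bin/cleanup.sh (a made-up path) and made executable with chmod +x:

```shell
# In crontab -e (your own user's crontab, no sudo):
# run the cleanup script every Sunday at 14:00
0 14 * * 0 /home/<user>/bin/cleanup.sh
```

The crontab entry just names the script; all the globbing and path logic lives in the script where a real shell handles it.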

BlankSystemDaemon
Mar 13, 2009



It's the difference between per-user crontab and system crontab, which in turn controls which user the commands are run as (although the system crontab also lets you specify user:group).

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:
Edit: Sorry, wasn't caught up.

Yaoi Gagarin
Feb 20, 2014

Seconding Volguus's idea, though if you really, really don't want to make it a separate script then you can wrap your commands with bash like this:

bash -c 'rm /home/<user>/Documents/foo/*.docx; rm /home/<user>/Documents/foo/bar/*.pptx'

F_Shit_Fitzgerald
Feb 2, 2017



Warbird posted:

Which one? Sudo or non sudo? What is the crontab command you’re using?

crontab -e

Next time I'll use sudo crontab -e.


Volguus posted:

That's not how the rm command works. Not even in Windows (I had to look to make sure) does the del command work like that. What you probably want is rm ~/Documents/foo/*.docx as a command line. For the crontab, I'd advise making a shell script:

code:
#!/bin/bash

rm -f /home/<user>/Documents/foo/*.docx
rm -f /home/<user>/Documents/foo/bar/*.pptx
That is: do not use ~, because it depends on which user cron is running the job as; specify the full path. "*.docx" is expanded by the shell, not by cron, so give it a shell. Add "-f" to force the removal, so "rm" doesn't stop to ask if you're really sure.

Oh. See, I thought it had to be structured that way for cron to be able to interpret it. A shell script would be ideal because I need to learn bash anyway.

Thanks for all the replies. There's so much I have yet to learn in Linux...

F_Shit_Fitzgerald fucked around with this message at 20:06 on Sep 25, 2022

Thanks Ants
May 21, 2004

#essereFerrari


Do you really want to delete those files on a schedule? What about moving them all to a folder named after the current date which then allows you to delete that folder manually or write another script to delete it a week later?

I'm just thinking of an edge case where your work takes longer than expected and runs over into the following week.
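A minimal sketch of that idea, reusing the ~/Documents/foo layout from earlier (the archive location is my invention):

```shell
#!/bin/bash
# Move this week's files into a dated folder instead of deleting them outright.
src="$HOME/Documents/foo"
dest="$HOME/Documents/archive/$(date +%F)"
mkdir -p "$dest"
# find handles both subdirectories; no matches is not an error
find "$src" -maxdepth 2 \( -name '*.docx' -o -name '*.pptx' \) -exec mv -t "$dest" {} +
```

Then deleting a stale dated folder later is a single, deliberate rm -r on a directory whose name tells you exactly what week you're throwing away.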

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
I probably would use logrotate or another existing script to handle that sort of thing.

kujeger
Feb 19, 2004

OH YES HA HA

F_Shit_Fitzgerald posted:

crontab -e

Next time I'll use sudo crontab -e.

Just to elaborate, when you're using 'sudo', you are kind of changing to the 'root' user and then running the command. This is only necessary if you need to do things that your own user is not permitted to do.

When you run 'sudo crontab -e', you are then editing the crontab of the 'root' user. If you only want to change or alter your own files, you should probably not use sudo -- you already have all permissions on your own files.

When you run 'crontab -e', you are editing your own crontab, and commands run from it will run with your own permissions. For deleting your own files, this is almost certainly what you want.

BlankSystemDaemon
Mar 13, 2009



In most cases, it's a better idea to fix permissions so that you don't need to run a privileged command in order to do something.

Very very few things need to run as root.

Tesseraction
Apr 5, 2009

Accidentally running an rm -rf while located in the root directory, for instance.


Something I tried in a VM recently; I was pleasantly surprised to see that if you try that these days it will error out and ask you to provide a flag like --no-really-this-is-not-an-accident before it will do it.

F_Shit_Fitzgerald
Feb 2, 2017



Thanks Ants posted:

Do you really want to delete those files on a schedule? What about moving them all to a folder named after the current date which then allows you to delete that folder manually or write another script to delete it a week later?

I'm just thinking of an edge case where your work takes longer than expected and runs over into the following week.

The nature of this work is that the docx and pptx files change week by week. Is there another package that could automate this other than cron?


kujeger posted:

Just to elaborate, when you're using 'sudo', you are kind of changing to the 'root' user and then running the command. This is only necessary if you need to do things that your own user is not permitted to do.

When you run 'sudo crontab -e', you are then editing the crontab of the 'root' user. If you only want to change or alter your own files, you should probably not use sudo -- you already have all permissions on your own files.

When you run 'crontab -e', you are editing your own crontab, and commands run from it will run with your own permissions. For deleting your own files, this is almost certainly what you want.

Ah, OK. I was dumb and didn't realize that different crontabs exist for the user and for the root.

Tesseraction
Apr 5, 2009

F_Shit_Fitzgerald posted:

The nature of this work is that the docx and pptx files change week by week. Is there another package that could automate this other than cron?

This seems like something that could be cron-ed

basically

tar -cvf /home/fitzgerald/Documents/backups/$(date +%F).tar /home/fitzgerald/Documents/foo && rm -fr /home/fitzgerald/Documents/foo/*

in a .sh file

With edits based on what else you need dealt with, plus compression flags for bzip2/gzip/xz and a change to the tar extension as necessary.

Warbird
May 23, 2012

America's Favorite Dumbass

F_Shit_Fitzgerald posted:

Ah, OK. I was dumb and didn't realize that different crontabs exist for the user and for the root.

Sorry, yeah that’s what I was getting at. Since you were using the tilde home shortcut occam’s was that you were running as a different user than expected by way of the root cron. I’m still curious why it worked in your CLI interface as the command shouldn’t work when run in the way you posted it. This said I just assumed there was some sort of [pattern] [target] mode for rm that I wasn’t privy to because Linux.

avoid doorways
Jun 6, 2010

'twas brillig
Gun Saliva

Warbird posted:

Sorry, yeah that’s what I was getting at. Since you were using the tilde home shortcut occam’s was that you were running as a different user than expected by way of the root cron. I’m still curious why it worked in your CLI interface as the command shouldn’t work when run in the way you posted it. This said I just assumed there was some sort of [pattern] [target] mode for rm that I wasn’t privy to because Linux.

rm accepts multiple files

rm *.docx ~/Documents/foo

would delete all docx in the current directory, and then attempt to delete foo which it should complain is a directory.
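A quick way to see that behaviour for yourself in a scratch directory (the paths here are made up):

```shell
# rm treats every argument as a separate thing to delete
cd "$(mktemp -d)"
mkdir foo
touch a.docx b.docx
# Deletes a.docx and b.docx, then complains that foo is a directory
rm *.docx foo || true
ls   # only foo remains
```

So the original crontab entries were deleting docx files from whatever cron's working directory happened to be, not from ~/Documents/foo.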

Warbird
May 23, 2012

America's Favorite Dumbass

Well there ya go

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Work stuff: If you had a Linux kernel developer that was having to actually work on the kernel itself for your company, do you expect them to also be working on drivers and other applications? I would think having to mingle in the latest kernel code and balancing your own changes with it would be enough of a job for one person. But maybe that's not normal and these people are just kind of hitting the kernel as a side gig of their "linux kernel developer" job?

Volguus
Mar 3, 2009

Rocko Bonaparte posted:

Work stuff: If you had a Linux kernel developer that was having to actually work on the kernel itself for your company, do you expect them to also be working on drivers and other applications? I would think having to mingle in the latest kernel code and balancing your own changes with it would be enough of a job for one person. But maybe that's not normal and these people are just kind of hitting the kernel as a side gig of their "linux kernel developer" job?

There should be a PM, and there should be a clear list of tasks the person has to do, with estimates of those tasks from that person. Monday, I'm working on the kernel, fixing bug 123. Tuesday, on application X, adding feature 456. Etc. That also means that on Tuesday there's no work being done on the kernel, and if bug 123 takes longer than one day then it just doesn't get done until probably Wednesday.

At the end of the day whoever signs the cheques specifies what's important to them and what are their priorities, right?

ExcessBLarg!
Sep 1, 2001

Rocko Bonaparte posted:

Work stuff: If you had a Linux kernel developer that was having to actually work on the kernel itself for your company, do you expect them to also be working on drivers and other applications?
Is there enough kernel work to consume 40 hours a week on average? If yes, then that's probably enough.

What are the "drivers" if not in-tree?

As far as applications go, ancillary C user-space utilities that interface with the drivers absolutely make sense. Think anything that would belong in util-linux in a general-purpose system. But if "applications" are like Java GUI poo poo I wouldn't expect a kernel person necessarily to do that.

Rocko Bonaparte posted:

I would think having to mingle in the latest kernel code and balancing your own changes with it would be enough of a job for one person. But maybe that's not normal and these people are just kind of hitting the kernel as a side gig of their "linux kernel developer" job?
I think it depends. If you work for a peripheral manufacturer and are maintaining a driver for Linux that makes use of a popular subsystem such that you only really have to keep up with subsystem API changes and occasionally validate builds, that's probably not a 40 hour a week job. But if you're keeping on top of like, scheduler patches for your funny out-of-tree HPC cluster solution then yeah that's going to be a constant battle until you can get it in-tree, and even then you're probably expected to be the official maintainer of it.

Generally speaking, there's going to be many more folks of the former description for which actual kernel work is only 10-20% of their job responsibility.

BlankSystemDaemon
Mar 13, 2009



ExcessBLarg! posted:

What are the "drivers" if not in-tree?
Judging by all the SBCs that require you to use the vendor's own distribution to work properly, it seems quite common not to have drivers in-tree.

BattleMaster
Aug 14, 2000

There are also userspace drivers made with uio_pci_generic or vfio-pci. Those are minimal drivers that, when assigned as the driver of a device, allow userspace code to directly access the device. But you don't need to be a kernel specialist to write those so I don't know how much that would apply.

Ihmemies
Oct 6, 2012

Is there some way to make Ubuntu on Hyper V faster? I have a 12700K and RTX 3080, and Windows 11. Ubuntu is 22.04.1 LTS and kernel version is linux 5.15.0.1020-azure.

The refresh rate is 50Hz at 3840x2160. I have no idea why it's only 50Hz or how to increase it. My monitor supports up to 120Hz, but it would be nice if the interface wasn't so slow.

Like if I max a firefox window it takes maybe a second or two while it slowly renders the window again.



It ran fine on Win10 + 2560x1440. I upgraded the OS and monitor and lag began.

Tesseraction
Apr 5, 2009

Well I've discovered something lovely. I tried to perform a back-up of some data and found that the output is completely bollocksed. Turns out btrfs (don't ask me I didn't pick it) reuses inums across subvolumes, so the archive assumes that same inum = hardlink, don't bother saving the file again.

Any ideas how to get past this?

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
The obvious thing to do sounds like making one backup archive per subvolume.

Tesseraction
Apr 5, 2009

That is the hard way of doing it. Problem is some subvolumes are terabytes some are gigabytes, and it's a waste to put small ones on a whole tape, hence the combination.

I'm developing a galaxy brain idea of something like

tar -cvf /dev/st1 <(tar cvf /subvolume1) <(tar cvf /subvolume2)

but I'm certain this will either laugh at me or create the world's worst mistake.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
How about using btrfs's integrated stream function?
Or, I suppose, if you are using tar, it should have plenty of options to manipulate hardlink detection to avoid exactly this problem.

Tesseraction
Apr 5, 2009

You know what, you might be right - I might be overthinking this... I'll give it a go and see how it fares.

BlankSystemDaemon
Mar 13, 2009



Yeah, taking the standard I/O stream and optionally compressing that is much better than trying to work at the filesystem level.

Assuming something equivalent to zfs corrective receive is coming to btrfs, it also gives you that option: it can fix failed data, provided the device still works and has only had minor errors, without needing any mirroring or striped data with distributed parity to heal itself automatically.

Mr. Crow
May 22, 2008

Snap City mayor for life

Ihmemies posted:

Is there some way to make Ubuntu on Hyper V faster? I have a 12700K and RTX 3080, and Windows 11. Ubuntu is 22.04.1 LTS and kernel version is linux 5.15.0.1020-azure.

The refresh rate is 50Hz at 3840x2160. I have no idea why it's only 50Hz or how to increase it. My monitor supports up to 120Hz, but it would be nice if the interface wasn't so slow.

Like if I max a firefox window it takes maybe a second or two while it slowly renders the window again.



It ran fine on Win10 + 2560x1440. I upgraded the OS and monitor and lag began.

Probably need to up the amount of VRAM, though a quick Google search suggests that may not be possible in Hyper-V? It's been a while since I've had to mess with Hyper-V, but that's usually the issue in KVM as desktop resolution gets higher. I thought Hyper-V had some vGPU settings, but idk.

ExcessBLarg!
Sep 1, 2001

Tesseraction posted:

Turns out btrfs (don't ask me I didn't pick it) reuses inums across subvolumes, so the archive assumes that same inum = hardlink, don't bother saving the file again.
Yikes, I make this assumption pretty frequently. I don't use btrfs though.

Tesseraction
Apr 5, 2009

ExcessBLarg! posted:

Yikes, I make this assumption pretty frequently. I don't use btrfs though.

Yeah, I looked it up and there is discussion about fixing it in the Linux kernel; it's apparently a bigger issue over NFS. Even now, their method to fix it apparently still allows collisions, they're just "less likely."

Luckily a grep on previous archives makes it easy to tell if it mistook files for hardlinks...
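For anyone wanting to replicate that check: GNU tar's verbose listing marks suspected hardlinks with "link to", so a grep against the listing shows exactly which files got deduplicated. A small sketch in a scratch directory:

```shell
# Create two genuinely hardlinked names and archive them
cd "$(mktemp -d)"
echo data > a
ln a b                     # b is a hardlink to a
tar -cf demo.tar a b
# The second entry is stored as a link, not a second copy of the data
tar -tvf demo.tar | grep 'link to'
```

On btrfs, files from different subvolumes that merely share an inode number would show up the same way, which is the bug being described.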

other people
Jun 27, 2004
Associate Christ

Ihmemies posted:

Is there some way to make Ubuntu on Hyper V faster? I have a 12700K and RTX 3080, and Windows 11. Ubuntu is 22.04.1 LTS and kernel version is linux 5.15.0.1020-azure.

The refresh rate is 50Hz at 3840x2160. I have no idea why it's only 50Hz or how to increase it. My monitor supports up to 120Hz, but it would be nice if the interface wasn't so slow.

Like if I max a firefox window it takes maybe a second or two while it slowly renders the window again.



It ran fine on Win10 + 2560x1440. I upgraded the OS and monitor and lag began.

I don't have any Hyper-V experience, but I know there is a hyperv_fb module that guests can use for display if the guest presents the right virtual hardware. I would assume modern Ubuntu provides the module, so check on the hypervisor to see which virtual graphics device is being provided to this guest.

And on the guest side you can look in lsmod and dmesg to see if it is loaded.


Phosphine
May 30, 2011

WHY, JUDY?! WHY?!
🤰🐰🆚🥪🦊

Tesseraction posted:

Well I've discovered something lovely. I tried to perform a back-up of some data and found that the output is completely bollocksed. Turns out btrfs (don't ask me I didn't pick it) reuses inums across subvolumes, so the archive assumes that same inum = hardlink, don't bother saving the file again.

Any ideas how to get past this?

What backup tool? It might have an option either for btrfs support, or for something like "don't skip hardlinks".

Edit: hadn't refreshed. If tar is how you do it, you could try --hard-dereference.
The man page says "Follow hard links; archive and dump the files they refer to". Not super clear what that means or what it does without it, but it might do something.
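For what it's worth, with GNU tar --hard-dereference does seem to do the obvious thing: each hardlinked name is archived as a full regular file instead of a "link to" entry. A quick local sketch:

```shell
cd "$(mktemp -d)"
echo data > a
ln a b                                   # b is a hardlink to a
tar --hard-dereference -cf demo.tar a b
# No 'link to' entries: both names carry the file contents
tar -tvf demo.tar
```

That should sidestep the btrfs inode-reuse problem at the cost of storing duplicated data for any real hardlinks in the tree.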

Phosphine fucked around with this message at 18:58 on Sep 28, 2022
