emdash
Oct 19, 2003

and?

Bark! A Vagrant posted:

To keep this filesystem/disk chat rolling, I'm losing my goddamn mind trying to figure out why df says I'm using almost twice what du says. I ran through the top search results, like hunting down any deleted processes. Here's potentially relevant output (fairly sure it's not the cloud drive, as there aren't enough files in there + the used space seems suspiciously close to double what du says + the 4 GB of metadata I see from "btrfs device usage /"):



As an aside, I've been using this machine for ~3 years while learning linux so the chance I ran a really stupid command at some point is nonnegligible.


Apparently “btrfs fi du -s” can be used instead of plain du -s. Maybe see if that gives a more sensible picture.
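
Something like this, if you want to compare the two directly (running against / just as an example):
code:
# classic du: walks the tree and adds up file sizes
sudo du -sh /

# btrfs-aware du: also breaks out shared vs exclusive space
sudo btrfs filesystem du -s /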

emdash fucked around with this message at 12:35 on Feb 3, 2024


digitalist
Nov 17, 2000

journey into Kirk's unknown


Have you tried something along the lines of lsof | grep deleted to see if any processes are holding on to deleted files?
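
A minimal version, assuming lsof is installed (needs root to see other users' processes):
code:
# open files whose link count is 0, i.e. deleted but still held open
sudo lsof +L1

# or the broader grep approach
sudo lsof | grep -i deleted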

Phosphine
May 30, 2011

WHY, JUDY?! WHY?!
🤰🐰🆚🥪🦊

BlankSystemDaemon posted:

You could also just have a games group that you're part of, and which has ownership of the games directory.

I used to have it set up like this, so me and my wife could share a drive for games, but in the end she just started using my login instead because actually having separate users didn't achieve anything except sometimes make stuff harder to do, so now it's just owned by me.

Klyith
Aug 3, 2007

GBS Pledge Week
If you look at the picture you'll see that Bark is using the btrfs commands for du and usage.


Btrfs du adds up the space used by files, including what's shared between reflinks and snapshots. Usage accurately shows the free space tracked by the filesystem itself, via a simple increment/decrement counter. The discrepancy comes from a third category of used space: the wasted space produced when an extent is CoW'd. Measuring that space is hard -- the best tool for the job literally does it by randomly sampling the whole drive.

Here's what's happening. Let's take a simple copy-on-write filesystem with a file:
code:
Pack my box with five dozen liquor jugs.
I modify the file, and this happens:
code:
Pack my box with five dozen liquor jugs.
            with six dozen
The second line is an extent that was copied during the write. Btrfs keeps track of the extents for each file (which is why btrfs and other CoW filesystems use a lot of metadata space). If I had a snapshot from before, the file in that snapshot would say five.

But this also wastes some space, because the copied extent was bigger than the modification.


You can recover the wasted space with btrfs filesystem defragment but defrag will unlink files in snapshots. (So if I defragged my data volume that has 200GB used out of 500GB space, with 4 old snapshots of the whole volume, it would run out of space. I'd need to delete the snapshots first.)
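
For reference, the recursive form is something like this (the path is just an example, and again, this unshares snapshotted extents):
code:
# rewrite the extents of everything under /home
sudo btrfs filesystem defragment -r /home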


tl;dr: The estimated free space from usage is fairly accurate and reliable. Du is not useful as a whole-volume used space tool.

ArcticZombie
Sep 15, 2010

Bark! A Vagrant posted:

To keep this filesystem/disk chat rolling, I'm losing my goddamn mind trying to figure out why df says I'm using almost twice what du says. I ran through the top search results, like hunting down any deleted processes. Here's potentially relevant output (fairly sure it's not the cloud drive, as there aren't enough files in there + the used space seems suspiciously close to double what du says + the 4 GB of metadata I see from "btrfs device usage /"):



As an aside, I've been using this machine for ~3 years while learning linux so the chance I ran a really stupid command at some point is nonnegligible.

I can't help you, but I'm absolutely dying at the Microsoft Community Forum-level responses you've got so far.

digitalist
Nov 17, 2000

journey into Kirk's unknown


So you have nothing to contribute but you’re more than happy to poo poo on the people who are trying to be helpful?

Storm One
Jan 12, 2011

Klyith posted:

You can recover the wasted space with btrfs filesystem defragment but defrag will unlink files in snapshots.
Sorry for off-topic, but does anyone know if this is also the case with XFS and bcachefs?

I avoid using reflinks for dedup and stick to hardlinks exclusively because of this but I'm not sure if it's a Btrfs specific issue or something intrinsic to all reflink-capable filesystems.

Klyith
Aug 3, 2007

GBS Pledge Week
Hmmm, an update after more docs reading: it's more complicated than my example, but there may also be a simpler method. Try a balance rather than a defragment.

From the docs, balance will compact used data and free up space. The docs page has very comprehensible examples.
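
Going by the docs, a cautious first pass is something like this (the usage filter only touches block groups that are mostly empty, which keeps it cheap):
code:
# repack data block groups that are less than 50% used
sudo btrfs balance start -dusage=50 /

# watch progress from another terminal
sudo btrfs balance status /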


This is a situation I haven't hit myself, as my btrfs volume is storing fairly static data. So my usage ratio is very high:
code:
btrfs filesystem df .
Data, RAID1: total=216.00GiB, used=215.55GiB   <- only ~0.5GiB wasted space
System, RAID1: total=32.00MiB, used=48.00KiB
Metadata, RAID1: total=1.00GiB, used=268.22MiB
GlobalReserve, single: total=227.39MiB, used=0.00B

Storm One posted:

Sorry for off-topic, but does anyone know if this is also the case with XFS and bcachefs?

I avoid using reflinks for dedup and stick to hardlinks exclusively because of this but I'm not sure if it's a Btrfs specific issue or something intrinsic to all reflink-capable filesystems.

I'd think XFS would be completely different since it's not a CoW filesystem, but you aren't supposed to need to defrag XFS in the first place anyway.

ZFS would be the FS that has the most similarity. ZFS also has fragmentation issues, and limited options to deal with it if it becomes a problem.

emdash
Oct 19, 2003

and?

Klyith posted:

If you look at the picture you'll see that Bark is using the btrfs commands for du and usage.


Whoops that’s what I get for looking at his post before caffeine

Storm One
Jan 12, 2011

Klyith posted:

I'd think XFS would be completely different since it's not a CoW filesystem

If I'm not mistaken, XFS has been data-only CoW for a few years now, ever since reflink support was added.

It's not true (metadata + data) CoW like ZFS etc, but good enough for reflink dedupe, which I wouldn't mind using if only I could be assured that extent unsharing on defrag isn't an issue like it is with Btrfs.

Bark! A Vagrant
Jan 4, 2007

Grad school is good for mental health
Well, trying to balance got spicy: it threw an error and the filesystem is now in read-only mode. Setting up a live-usb to boot around and double check nothing is missing from my last backup, but at this point I'm probably just going to reformat and run some tests. Also looks like my wait for a 2 TB PCIe 4.0 SSD to go on sale might come to an end prematurely; thankfully they're not too crazy expensive as far as parts go.

Bark! A Vagrant fucked around with this message at 01:15 on Feb 4, 2024

Klyith
Aug 3, 2007

GBS Pledge Week

Bark! A Vagrant posted:

Well, trying to balance got spicy: it threw an error and the filesystem is now in read-only mode. Setting up a live-usb to boot around and double check nothing is missing from my last backup, but at this point I'm probably just going to reformat and run some tests. Also looks like my wait for a 2 TB PCIe 4.0 SSD to go on sale might come to an end prematurely; thankfully they're not too crazy expensive as far as parts go.

Oof. Do you know what the error was? I think if you can boot enough to get a terminal in read-only mode, you can use journalctl or dmesg to see what it complains about.

There are definitely ways to recover from a failed balance; some of them are way easier than a reinstall, depending on what is hosed up.


Also I'd check smartctl -a /dev/drive to see if your drive is full of internal errors -- that might make your decision about a new drive much easier. For a NVMe drive the thing you care about is "Media and Data Integrity Errors". SATA drives call it different things, generally involving the word "Uncorrectable".
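
E.g., something like this (device names will differ on your system):
code:
# NVMe: check the "Media and Data Integrity Errors" line
sudo smartctl -a /dev/nvme0

# SATA: look for Reallocated_Sector_Ct / Offline_Uncorrectable
sudo smartctl -a /dev/sda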

Bark! A Vagrant
Jan 4, 2007

Grad school is good for mental health
I was able to mount and copy the partition to an external drive, and `smartctl -a /dev/drive` and `smartctl -t long /dev/drive` were both error free :shrug:

Plumbing deeper depths of the internet, it seems like the current recommended solution is to "simply" back up the partition, reformat it, and move everything back with btrfs send/receive. This package looks promising based on the description, but I've lost enough time loving around trying to fix the current partition and am certainly not qualified to evaluate whether it's safe to run regardless.
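
For anyone who finds this later, the general shape of it is something like this (paths made up, and the destination has to be a btrfs filesystem too):
code:
# send needs a read-only snapshot
sudo btrfs subvolume snapshot -r / /root_snap
sudo btrfs send /root_snap | sudo btrfs receive /mnt/external

# after reformatting, send/receive the snapshot back the same way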

e: No data lost, though I used this as an excuse to switch to Tumbleweed. I've been using it on my laptop with sway, though I'm sticking with Gnome+PaperWM on the desktop. I decided to use separate partitions for home and root and only use btrfs on root this time around. Snapshots are too useful with an Nvidia GPU; some kernel update breaks things at least once every few months.

Bark! A Vagrant fucked around with this message at 03:12 on Feb 5, 2024

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
I am building a web cluster using Ubuntu 22.04 and Apache2 for the front end(s) and a TrueNAS NFS share on the backend for the file store.

I am trying to change ownership of the NFS share to allow the web servers to perform updates and file writes but I'm bumping into a permissions issue. So I'm running the following command

quote:

sudo chown www-data:www-data /nfs/web/files/

and I'm getting

quote:

chown: changing ownership of '/nfs/web/files': Invalid argument

I'm reading that the invalid argument issue is stemming from it being an NFS share mounted locally, but I'm not sure how to rectify it. Can someone help point me in the right direction?

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
If you are trying to do that from the Ubuntu side, root squash will prevent you from using your sudo powers. Use 'id www-data' to check what the UID and GID are, and change the ownership to those on the TrueNAS side.

https://superuser.com/questions/1737302/root-squashing-for-nfs-and-smb-clarification
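
Roughly like this; 33 is just what Debian/Ubuntu normally use for www-data, and the dataset path is an example:
code:
# on the Ubuntu box: get the numeric IDs
id www-data
# uid=33(www-data) gid=33(www-data) groups=33(www-data)

# in a shell on the TrueNAS box: chown by number
chown -R 33:33 /mnt/pool/web/files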

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Saukkis posted:

Use 'id www-data' to check what the UID and GID are, and change the ownership to those on the TrueNAS side.

...and how do I go about changing ownership in TrueNAS? I assume there's some kind of mapping function that'll assign a user or group on the Ubuntu boxes to a user on the TrueNAS box?

AlexDeGruven
Jun 29, 2007

Watch me pull my dongle out of this tiny box


Drop the / after files; that will help, I think. And if you want the user to be able to change any existing files, add a -R after chown.

Is nobody:nobody still the standard for NFS ownership?

spiritual bypass
Feb 19, 2008

Grimey Drawer
NFS ownership is a hassle because it expects the same user ID on client and server, not just the same names, iirc.

Computer viking
May 30, 2011
Now with less breakage.

Depends on if it's NFSv3 (which sends numeric IDs over the network) or NFSv4 (which sends names), too.

If this is NFSv3, the easiest thing is to get a shell on the TrueNAS server and do it there - the shared folder will probably be in /mnt/zpool/something. I think you can use numeric IDs with chown, so check the uid and gid on the web servers (and verify that they are the same) and then chown -R 1001:1001 /mnt/zpool/webshare or whatever.

To get a shell on truenas, you can use the slightly crappy web shell in the web interface, or make yourself a user and enable ssh, or plug in a physical (or virtual, if you have that sort of out of band remote management) keyboard and monitor.

Computer viking
May 30, 2011
Now with less breakage.

Agrikk posted:

...and how do I go about changing ownership in TrueNAS? I assume there's some kind of mapping function that'll assign a user or group on the Ubuntu boxes to a user on the TrueNAS box?

To be explicit: No, not unless you use NFSv4. The traditional way NFS works is that the client says "I am uid 1000 and I want to read this file", and the server checks "can user 1000 on my side read this file". This works fine if 1000 is actually the same user on every machine, but to ensure that, you either need to be careful or use some sort of centralised login system that assigns user IDs.

Also, I'm sure you can see how this is a huge security hole if a machine not under your control can mount the share.

The "root squash" mentioned is a server rule that says "if a client claims to be uid 0 [root], quietly change that to this other uid" - so you do at least stop rogue clients from having root access to the files. That's why sudo chown doesn't work: The client sends "as uid 0, change these fields", the server changes that to "as uid [some user with no rights] ...", checks if that user is allowed to do so, and returns an "access denied" message.

Computer viking fucked around with this message at 02:22 on Feb 6, 2024

BlankSystemDaemon
Mar 13, 2009




YellowPages (now called NIS) was once upon a time the way to synchronize names and IDs for NFS, if you didn’t want a full Kerberos setup.
So far as I know, it still works on FreeBSD; it wasn't that long ago that I saw someone mention they were using it.

If you’re using the FreeBSD based TrueNAS, there’s documentation.

EDIT: An alternative is to switch to an NFSv4-only setup, but on Linux you don't get full NFSv4 ACLs, so that might or might not be an issue.

EDIT2: Another approach is to use LDAP as that's truly vendor-neutral.

AlexDeGruven posted:

Is nobody:nobody still the standard for NFS ownership?
That combination, or nobody:nogroup, is for anonymous NFS access - think WebNFS or read-only NFS shares.

I'm using it to access an NFSv4-only, TLS-encrypted share over the WAN on FreeBSD, using just a single TCP port.

BlankSystemDaemon fucked around with this message at 09:23 on Feb 6, 2024

Computer viking
May 30, 2011
Now with less breakage.

I have to take another look at NFSv4 some day soon, that looks like it could be useful.

BlankSystemDaemon
Mar 13, 2009




Computer viking posted:

I have to take another look at NFSv4 some day soon, that looks like it could be useful.
The TLS option for NFS isn't even an RFC yet, although I think it's meant to be part of NFSv4.3 or v4.4?

Currently, it only works on FreeBSD, as Rick Macklem was the first to have a working implementation.

Computer viking
May 30, 2011
Now with less breakage.

I'd be using it between a FreeBSD file server and a FreeBSD calculation server, so that works for me.

mawarannahr
May 21, 2019

Computer viking posted:

I'd be using it between a FreeBSD file server and a FreeBSD calculation server, so that works for me.

What are you calculating?

Computer viking
May 30, 2011
Now with less breakage.

mawarannahr posted:

What are you calculating?

Personally not much, except for sporadically running tools for people, e.g. pcgr.


The researchers and students do their heavy lifting in R and python on it, so things like "Epigenetic alterations at distal enhancers are linked to proliferation in human breast cancer" or "Dynamic changes in the T cell receptor repertoire during treatment with radiotherapy combined with an immune checkpoint inhibitor". Some of that was done on random workstations and even laptops, and some was done on the cranky old server we use, because 256GB of RAM and the ability to set and forget a job that'll run quietly over the weekend is still very useful.

Also, apologies for my (lack of) citation formatting. :)

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Been thinking a bit about some GPU stuff. Here's the use case I'm going for: I'd like to have Fedora use an AMD iGPU for Wayland, then be able to use my 3080 for Proton games and a windows VM via passthrough.

I have an iGPU on my Ryzen 7700X that I turned off in the bios because it interferes with some Proton games starting properly. I've looked up a way to force Proton to use my 3080, which seems easy enough.

Two questions:

1. If I turn the iGPU back on, will Linux (I'm on Fedora 39) use it for desktop stuff, leaving the 3080 to just deal with games and other things I specify? My thinking is with the switch to Wayland in F40, the AMD iGPU will more than handle desktop duty and also not be a shitshow.

2. I have a couple work apps that don't work well enough in Linux to really use effectively. I can dual boot to windows but don't really want to. If I were to use a VM platform to create a Windows 10 VM and have the iGPU handling KDE's display, could I pass through the 3080 to deal with my Photoshopping in the Windows VM without everything breaking? Obviously I couldn't/wouldn't run Proton with the VM open.

2.a. Is there a VM platform that's ideally free/cheap and handles GPU passthrough? Bonus points if there's a community for it so I don't need to poo poo up this thread with esoteric questions.

Klyith
Aug 3, 2007

GBS Pledge Week

Well Played Mauer posted:

2.a. Is there a VM platform that's ideally free/cheap and handles GPU passthrough? Bonus points if there's a community for it so I don't need to poo poo up this thread with esoteric questions.

Free VM platforms / software: sure. QEMU/KVM is the main VM engine & is open source.

You probably want a graphical frontend / manager for that (though it's not required, you can do everything in terminal if you're hardcore).

Virt-manager is the basic standby; it's pretty approachable. I figured out how to run my windows install via drive passthrough pretty quickly (but that's substantially easier than gpu passthrough). The one downside is that virt-manager doesn't expose all the options for a good gaming VM in the normal setup, so you do need some hand editing of the XML files that define the VM.
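
The hand editing itself usually goes through virsh (the VM name here is made up):
code:
# opens the VM's libvirt XML definition in $EDITOR
sudo virsh edit win10-gaming

# sanity-check that your edit stuck
sudo virsh dumpxml win10-gaming | grep -A3 hostdev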

And you want a way to see what's happening on the VM -- which could be a wire to your monitor & switching inputs, or could be software like Looking Glass that just grabs the video buffer from the 3080 and blaps it to the host GPU.


Community: the level1techs forum is a hotbed of talk for Looking Glass. A lot of them are partial to Proxmox rather than virt-manager but that seems super complicated.



One of the things that seemed like a general downside to all of this stuff is that I think it's difficult to switch the 3080 between the host and the VM on the fly. So using the 3080 both for gaming in linux via proton and for a windows VM might be difficult. (But I could be totally wrong about that; most of my info was research from 2 years ago and I never ended up doing any of it.)

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Can you do GPU passthrough with consumer NVIDIA cards now? I thought that was all locked to their datacentre cards.

waffle iron
Jan 16, 2004

Subjunctive posted:

Can you do GPU passthrough with consumer NVIDIA cards now? I thought that was all locked to their datacentre cards.

Pretty much all modern AMD and Intel platforms support IOMMU, so a single VM can connect to a PCIe device. However, if you want multiple VMs and the base OS to access the one PCIe device at the same time, that won't work. NVIDIA's solution on enterprise cards is to present multiple virtual GPUs (vGPU) so that each VM thinks it's getting a full card that it has exclusive control of.

Edit: The gist of how IOMMU works is that it virtually reparents the PCIe device you select to the root of the PCIe hierarchy. Then you blocklist the kernel driver so Linux doesn't bring the device up. After that you can pass the PCIe device to a single VM at a time.
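
In practice the blocklisting is usually done by handing the card to vfio-pci at boot; a sketch, with placeholder IDs (get the real ones from lspci):
code:
# find the GPU's vendor:device IDs (the GPU plus its HDMI audio function)
lspci -nn | grep -i nvidia

# then on the kernel command line (e.g. GRUB_CMDLINE_LINUX), claim them
# for vfio-pci before the normal driver loads:
#   intel_iommu=on iommu=pt vfio-pci.ids=10de:2206,10de:1aef
# (on AMD boards the IOMMU is typically on by default)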

waffle iron fucked around with this message at 17:22 on Feb 10, 2024

zhar
May 3, 2019

Well Played Mauer posted:

1. If I turn the iGPU back on, will Linux (I'm on Fedora 39) use it for desktop stuff, leaving the 3080 to just deal with games and other things I specify? My thinking is with the switch to Wayland in F40, the AMD iGPU will more than handle desktop duty and also not be a shitshow.

maybe nvidia optimus or something could handle it (no doubt messily), but otherwise I'm not sure how the driver situation would work out. I'd be interested to hear if it does work.

mystes
May 31, 2006

Subjunctive posted:

Can you do GPU passthrough with consumer NVIDIA cards now? I thought that was all locked to their datacentre cards.
I think you've always just had to do some trivial tweak in the hypervisor to get the driver to install. I've been doing gpu passthrough with a consumer nvidia card for years.


And yeah, in this situation, unless there's some magic way to use the nvidia card for acceleration of specific programs with PRIME (is that even supported on desktops?), you're probably better off just permanently dedicating the discrete gpu to the vm and running all the games in it.

If you didn't need acceleration for Proton, I would suggest just not having a gpu for the vm, though.

mystes fucked around with this message at 17:29 on Feb 10, 2024

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

waffle iron posted:

Pretty much all modern AMD and Intel platforms support IOMMU, so a single VM can connect to a PCIe device. However, if you want multiple VMs and the base OS to access the one PCIe device at the same time, that won't work. NVIDIA's solution on enterprise cards is to present multiple virtual GPUs (vGPU) so that each VM thinks it's getting a full card that it has exclusive control of.

Edit: The gist of how IOMMU works is that it virtually reparents the PCIe device you select to the root of the PCIe hierarchy. Then you blocklist the kernel driver so Linux doesn't bring the device up. After that you can pass the PCIe device to a single VM at a time.

Ah, nice. Does it work to give a GPU to a VM, then shut that down and use the GPU on the host, then shut that down and give it to another VM, etc?

mystes
May 31, 2006

Subjunctive posted:

Ah, nice. Does it work to give a GPU to a VM, then shut that down and use the GPU on the host, then shut that down and give it to another VM, etc?
You can pass through and un-pass-through devices as you start and shut down vms, but it would probably be a pita each time in terms of changing the configuration for x11 or wayland, restarting x11/wayland, ensuring the linux drivers aren't using the device when you're trying to pass it through, etc.
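
the manual rebind dance, for reference, is roughly this (the PCI address is an example, and nothing on the host can be using the gpu at the time):
code:
# detach the device from whatever host driver currently has it
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind

# steer it to vfio-pci and reprobe
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 | sudo tee /sys/bus/pci/drivers_probe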

mystes
May 31, 2006

Oh, also: it's probably not an issue with more recent motherboards, but if you've never done gpu passthrough on a given computer before and you have an older motherboard, be aware that you can theoretically have problems with the motherboard being stupid about iommu groups in a way that makes it impossible to pass through devices.
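
you can check the grouping up front with the usual one-liner, something like:
code:
# list devices per IOMMU group; ideally the gpu shares a group only
# with its own audio function
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done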

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

mystes posted:

You can pass through and un-pass-through devices as you start and shut down vms, but it would probably be a pita each time in terms of changing the configuration for x11 or wayland, restarting x11/wayland, ensuring the linux drivers aren't using the device when you're trying to pass it through, etc.

Yeah I was thinking “start a second X/wayland display on the dGPU for Blender” rather than using it to drive the main display, but you raise a good point.

mystes
May 31, 2006

Subjunctive posted:

Yeah I was thinking “start a second X/wayland display on the dGPU for Blender” rather than using it to drive the main display, but you raise a good point.
With a second display it would be simpler (a second display also makes gpu passthrough in a vm simpler in general).

In practice, once you're using gpu passthrough with a windows vm on the better gpu, it's probably easier to just run all the gpu-intensive stuff in windows, though.
----

My current situation is that I started using gpu passthrough with windows maybe 5 years ago, and I had one (worse) nvidia card for the host and one (slightly better) nvidia card for the vm.

However, after recently switching to wayland and getting a 4k monitor, I got a slightly more recent radeon card for the host, and proton is extremely good now, so I don't really feel like I have anything in windows that still needs gpu passthrough.

The one remaining problem is that I can't seem to get any rdp clients to do 4k resolution because of xwayland scaling stupidity, so I'm still using lookingglass (I already have it set up, and it does get 4k out of windows), but once I can get an rdp client working adequately I'll probably stop doing gpu passthrough.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
I have virt manager and qemu working to do passthrough for the dgpu in my work laptop. It works well enough, though it is rather janky as my computer freezes if I ever shut down the vm.

mystes
May 31, 2006

Watermelon Daiquiri posted:

I have virt manager and qemu working to do passthrough for the dgpu in my work laptop. It works well enough, though it is rather janky as my computer freezes if I ever shut down the vm.
out of curiosity, is it nvidia or amd? and if it's nvidia, does it have its own video output? I tried to do gpu passthrough on a laptop before, but it had an nvidia dgpu without its own output; I think it was a muxless configuration with the integrated graphics connected to the display and hdmi, and at the time it didn't seem like there was any way to convince the nvidia drivers to run in a windows vm in that configuration


Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Well Played Mauer posted:

Been thinking a bit about some GPU stuff. Here's the use case I'm going for: I'd like to have Fedora use an AMD iGPU for Wayland, then be able to use my 3080 for Proton games and a windows VM via passthrough.

I have an iGPU on my Ryzen 7700X that I turned off in the bios because it interferes with some Proton games starting properly. I've looked up a way to force Proton to use my 3080, which seems easy enough.

Two questions:

1. If I turn the iGPU back on, will Linux (I'm on Fedora 39) use it for desktop stuff, leaving the 3080 to just deal with games and other things I specify? My thinking is with the switch to Wayland in F40, the AMD iGPU will more than handle desktop duty and also not be a shitshow.

2. I have a couple work apps that don't work well enough in Linux to really use effectively. I can dual boot to windows but don't really want to. If I were to use a VM platform to create a Windows 10 VM and have the iGPU handling KDE's display, could I pass through the 3080 to deal with my Photoshopping in the Windows VM without everything breaking? Obviously I couldn't/wouldn't run Proton with the VM open.

2.a. Is there a VM platform that's ideally free/cheap and handles GPU passthrough? Bonus points if there's a community for it so I don't need to poo poo up this thread with esoteric questions.

Have you considered Wolf?

https://games-on-whales.github.io/wolf/stable/index.html

Their discord has a working (I think, I've not tested it) podlet file for (rootful) podman:



code:

[Unit]
Description=Podman Wolf Gamestreaming
Requires=network-online.target mnt-storage.mount mnt-faststorage.mount

[Service]
TimeoutStartSec=900
ExecStartPre=-/usr/bin/mkdir /tmp/sockets
ExecStartPre=-/usr/bin/podman rm --force WolfPulseAudio
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Container]
AutoUpdate=registry
ContainerName=%N
HostName=%N
Image=ghcr.io/games-on-whales/wolf:alpha
AddCapability=CAP_SYS_PTRACE
AddCapability=CAP_NET_ADMIN
Network=host
SecurityLabelDisable=true
PodmanArgs=--ipc=host --device-cgroup-rule "c 13:* rmw"
AddDevice=/dev/dri
AddDevice=/dev/uinput
Environment=WOLF_STOP_CONTAINER_ON_EXIT=true
Environment=WOLF_LOG_LEVEL=INFO
Environment=WOLF_APPS_STATE_FOLDER=/mnt/faststorage/wolf
Environment=GST_DEBUG=2
Volume=/dev/input:/dev/input:ro
Volume=/run/udev:/run/udev:ro
Volume=/mnt/storage/Config/wolf/cfg:/wolf/cfg
Volume=/mnt/storage/Config/wolf/apps:/wolf/apps
Volume=/tmp/sockets:/tmp/sockets:rw
Volume=/run/podman/podman.sock:/var/run/docker.sock:ro

[Install]
WantedBy=multi-user.target
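
If you go that route: quadlet files get picked up from /etc/containers/systemd/, so assuming you save the above as wolf.container it'd be something like:
code:
sudo cp wolf.container /etc/containers/systemd/
sudo systemctl daemon-reload
sudo systemctl start wolf.service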

  • Reply