Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Scruff McGruff posted:

If all you need is video out then you can find old Radeon HD cards for like :10bux: all day long on ebay/marketplace that came out of workstations and servers for exactly this purpose.

Thanks, this seems like exactly what I need.


BlankSystemDaemon
Mar 13, 2009



Kibner posted:

ARC A310.
This is a very good option if someone wants a way to add hardware-accelerated transcoding to a server - especially if you get a low-profile variant.

ummel
Jun 17, 2002

<3 Lowtax

Fun Shoe
After trial and erroring my way through setting up my first Windows Server box, I'm wondering if I set something up inefficiently.

My server is mainly for Plex right now, because that's what my old junk computer server did, but I'm planning to learn about VMs, etc later. I have ~10 drives set up in mirror storage space.

My first error was using thick provisioning, so I transferred all the data off of the drives and restarted the process.

My second error was using the server manager GUI to set up the mirror again. But I added all the drives at once so it mirrored + striped them. Great read/write speed, but then I realized it limited the size to the smallest drive in the array, so I transferred all the data off of the drives and restarted the process.

Everything was going well with my 1-column mirror array of a hodgepodge of old drives from over the years. I think I got to 30 TB, then was trying to tie up loose ends with missing drivers, so I restarted the server and bam, chkdsk triggered on reboot. I don't know why, I didn't schedule it, afaik. 50+ hrs estimated to complete. It's been about 30 hrs and now it says 28 hrs left.

Should I have used ReFS instead of NTFS? Should I restart this whole process again with a ReFS format? When I was looking into it before, it seemed like ReFS wasn't a good idea for home/personal use at this time due to bugs and compatibility. But now that I've circled around to it again, I realize the posts I was reading were old, or information from people who swore off ReFS in 2015 and haven't revisited it.

I've been transferring data for like 3 weeks and I'm burnt out on it tbh. But if this is something that would benefit me in the long term, I'll bite the bullet and restart with a ReFS format.

Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.
Are you using storage spaces?

ummel
Jun 17, 2002

<3 Lowtax

Fun Shoe

Heck Yes! Loam! posted:

Are you using storage spaces?

Yes, I'm using the server manager UI to manage the storage drives. I am not using tiered storage (bc it's just files and hosting them, no VMs or apps, etc run from the drives currently), or striping them (bc they're all different sizes), just 1:1 mirror of matched drives in a pool to have one root directory to work out of.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
Dockge looks to be an excellent tool for managing the various docker based selfhosted apps/services while using regular ol' Docker compose.

You get a pretty basic WebUI where you can deploy the stacks, edit existing compose.yml files, and start/stop/update/restart your services. You also get a web console that can run commands on the host or inside individual containers. You can see the progress of containers being pulled/started and view the logs of running containers.

Behind the scenes you specify a "stacks" folder, and Dockge creates a subfolder inside there for each service, where the associated compose.yaml file for it will live and it assumes you will place application configs and things.
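The layout it assumes can be sketched like this (folder and service names here are just examples, not anything Dockge ships with):

```shell
# One subfolder per service under the stacks folder, each holding its compose file.
mkdir -p /tmp/stacks/uptime-kuma
cat > /tmp/stacks/uptime-kuma/compose.yaml <<'EOF'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"
EOF
ls /tmp/stacks    # each subfolder shows up as one "stack" in the Dockge sidebar
```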

What I like about this is that it's just using Docker compose; it's not taking over anything or using its own weird thing. You can create and deploy a container inside that stacks folder however you want and manage it with this, or edit a container this created using whatever program and tools you'd like.

Having used Unraid for a long time, I've gotten really used to having basic management of containers just 1 or 2 clicks away. That's very useful for basic self-hosted services, which are typically single instances running on single hardware, with none of the swarm/scaling/enterprise Docker stuff, and this gives me that.
I've tried Portainer and it's fine, but it always felt over-engineered and clunky.

It's open source and from the same developer as uptime-kuma, with a similar UI to that.
https://www.youtube.com/watch?v=AWAlOQeNpgU

Well Played Mauer
Jun 1, 2003

We'll always have Cabo
Looks cool. I have portainer running across two different machines but would be interested in trying something a little less complicated. I’m also bad about using portainer to create the containers so my options to update them inside portainer can be limited anyway.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
My setup is just Gitlab CI/CD that runs ansible, which runs docker compose. I run docker compose locally via intellij while tinkering, then commit and it deploys it to my server
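As a rough sketch of what the pipeline side of that could look like (job name, image, and file paths are all hypothetical, not fletcher's actual config):

```yaml
# .gitlab-ci.yml — CI runs ansible, ansible runs docker compose on the server
stages:
  - deploy

deploy:
  stage: deploy
  image: willhallonline/ansible:latest   # any image with ansible installed works
  script:
    - ansible-playbook -i inventory.ini deploy.yml
  only:
    - main
```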

Generic Monk
Oct 31, 2011

fletcher posted:

My setup is just Gitlab CI/CD that runs ansible, which runs docker compose. I run docker compose locally via intellij while tinkering, then commit and it deploys it to my server

Is there a guide on how to set this up? I’ve wanted to implement something like this for a while but didn’t know where to start

cruft
Oct 25, 2007

I run podman out of runit

https://git.woozle.org/neale/stacks/src/branch/main/homelab

Nitrousoxide
May 30, 2011

do not buy a oneplus phone




I like your big-builder. I might tweak that and try using it some time, as I'm currently using an unnecessarily resource-intensive gitlab instance for its ci/cd tools.

cruft
Oct 25, 2007

Nitrousoxide posted:

I like your big-builder. I might tweak that and try using it some time, as I'm currently using an unnecessarily resource-intensive gitlab instance for its ci/cd tools.

Big Builder is so awesome, and I didn't even invent anything, I just read between the lines of the documentation (and some source code, because the docs weren't great).

Last night a job failed because I hadn't installed zip in Betty's Containerfile, LOL. So I rebuilt builder Betty. Like, once. First rebuild in about a month.

It's frankly baffling to me that this technique isn't in the official documentation.

For those in the home audience: big-builder is just, like, an image with all the stuff you need to build things, and a CI/CD runner. Then you set up your CI/CD job to check out the code and run make, or whatever. No need to spin up and build images on demand through docker or k8s, because the image already has everything you need.
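A sketch of the idea (base image, tool list, and runner binary name are made up for illustration; the real "Betty" Containerfile will differ):

```dockerfile
# One image with every build tool the jobs need, plus the CI runner itself.
FROM docker.io/library/debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        git make gcc zip ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# The runner polls Gitea/Forgejo for jobs; no per-job image builds needed.
COPY forgejo-runner /usr/local/bin/forgejo-runner
ENTRYPOINT ["forgejo-runner", "daemon"]
```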

It does check in with Gitea/Forgejo to get tasks, though, Nitrousoxide.

Hughlander
May 11, 2005

THF13 posted:

Dockge looks to be an excellent tool for managing the various docker based selfhosted apps/services while using regular ol' Docker compose.

You get a pretty basic WebUI where you can deploy the stacks, edit existing compose.yml files, and start/stop/update/restart your services. You also get a web console that can run commands on the host or inside individual containers. You can see the progress of containers being pulled/started and view the logs of running containers.

Behind the scenes you specify a "stacks" folder, and Dockge creates a subfolder inside there for each service, where the associated compose.yaml file for it will live and it assumes you will place application configs and things.

What I like about this is that it's just using Docker compose; it's not taking over anything or using its own weird thing. You can create and deploy a container inside that stacks folder however you want and manage it with this, or edit a container this created using whatever program and tools you'd like.

Having used Unraid for a long time, I've gotten really used to having basic management of containers just 1 or 2 clicks away. That's very useful for basic self-hosted services, which are typically single instances running on single hardware, with none of the swarm/scaling/enterprise Docker stuff, and this gives me that.
I've tried Portainer and it's fine, but it always felt over-engineered and clunky.

It's open source and from the same developer as uptime-kuma, with a similar UI to that.
https://www.youtube.com/watch?v=AWAlOQeNpgU

That looks cool, does it support either fragments of compose files and/or .env files?

My workflow for any new docker is basically:
code:
mkdir NEWPROJECT
cd NEWPROJECT
ln -s ../.env .                      # share the common env file
cp ../grocy/docker-compose.yml .     # start from an existing service's compose file
vi docker-compose.yml
:1,$s/grocy/NEWPROJECT/g             # (inside vi) rename every occurrence
:wq
docker volume create -d zfs NEWPROJECT_config
docker compose up -d
Where grocy/docker-compose.yml looks like:
code:
version: '2'
networks:
  jefferson_default:
    external: true
volumes:
  grocy_config:
    external: true
services:
  grocy:
    image: lscr.io/linuxserver/grocy
    container_name: grocy
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - grocy_config:/config
    restart: unless-stopped
    networks:
      - jefferson_default
    labels:
      traefik.enable: true
      traefik.http.routers.grocy.rule: "Host(`grocy.${DOMAIN}`)"
      traefik.http.routers.grocy.tls: true
      traefik.http.routers.grocy.middlewares: "secured-admin"
      traefik.http.routers.grocy.priority: 99
      traefik.http.routers.grocy2.rule: "Host(`grocy.${DOMAIN}`) && ${PRIVATE_IP}"
      traefik.http.routers.grocy2.tls: true
      traefik.http.routers.grocy2.middlewares: "secured-local"
      traefik.http.routers.grocy2.priority: 100
That's a lot of boilerplate for setting up traefik on a shared network, using a zfs volume for config, etc...

cruft
Oct 25, 2007

Hughlander posted:

That's a lot of boilerplate

Friend, have you heard about Kubernetes?

I think the difference here is that your process is "go hack a couple files and kick it up with docker", which is cool. Docker Swarm and k8s are "you are running a massive installation and have 4 hours to recover from a plane crashing into the data center".

For homelabs, your process is just fine, and arguably a better use of hobby time.

cruft fucked around with this message at 19:45 on Jan 8, 2024

Hughlander
May 11, 2005

cruft posted:

Friend, have you heard about Kubernetes?

I think the difference here is that your process is "go hack a couple files and kick it up with docker", which is cool. Docker Swarm and k8s are "you are running a massive installation and need to be able to recover from a plane crashing into the data center in under 4 hours".

For homelabs, your process is just fine, and arguably a better use of hobby time.

? Weren't we basically talking about a more personal version of portainer though? Dockge hardly seems to be multi-region k8s level unless I missed something in the video.

cruft
Oct 25, 2007

Hughlander posted:

? Weren't we basically talking about a more personal version of portainer though? Dockge hardly seems to be multi-region k8s level unless I missed something in the video.

I'm latching on to the "boilerplate" part of what you said and going off on a thread-relevant tangent.

Motronic
Nov 6, 2009

I'm still confused as to what's wrong with the community/free edition of Portainer. It's all quite straightforward, in the standard repos, and just works.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Motronic posted:

I'm still confused as to what's wrong with the community/free edition of Portainer. It's all quite straightforward, in the standard repos, and just works.

I think the main draw of this tool is that it doesn't keep your compose files in unlabeled directories which you have to go through one by one to properly name if you ever want to leave the Portainer ecosystem. You just mount your existing compose file directory into that container, and it'll manage your services for you. If you decide to stop using it you just spin down the container and your compose files will still be where they are and will be the last version you had running.

I personally have dropped portainer for any sort of management. I do keep an instance running but only for its visualization tools for container resource usage. I do everything else via terminal or on my gitlab repo which I push via ssh to the server.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

Hughlander posted:

That looks cool, does it support either fragments of compose files and/or .env files?
I'm not familiar with fragments. Dockge supports .env files, and you can view and edit one per stack in its interface, but it doesn't do anything like support a single global env file that is automatically available to any new containers, or updated automatically across all your stacks when you update it.

Motronic posted:

I'm still confused as to what's wrong with the community/free edition of Portainer. It's all quite straightforward, in the standard repos, and just works.
Portainer is fine, and I meant that. One example of where it felt clunky to me was updating a running container. The portainer process for this wasn't hard, but it was too many steps for something I wanted to be a single click in a GUI. In Portainer it's: stop the container, go into the container, click recreate, enable re-pull image, click recreate again.
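With plain compose the same update is basically a one-liner; a hypothetical helper (the function name is invented, not anything Dockge or Portainer provides) would be:

```shell
# Pull newer images for the stack folder given in $1, then recreate only
# the containers whose images actually changed.
update_stack() {
  (cd "$1" && docker compose pull && docker compose up -d)
}
```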

As Nitrousoxide says, this will work alongside other solutions. So if you did have (IMO) a more homelabby type solution, like fletcher's "CI/CD that runs ansible and git auto-deploying to a server", this could work alongside that.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Nitrousoxide posted:

I personally have dropped portainer for any sort of management. I do keep an instance running but only for its visualization tools for container resource usage. I do everything else via terminal or on my gitlab repo which I push via ssh to the server.

Same, I've got maybe 15-20 containers spread between two physical Docker hosts, and another 10 deployed on K3s. Portainer's great for viewing them all in a single pane of glass, but I haven't moved any of my actual Compose management into Portainer, for the sake of having full control.

I still haven't put the mental energy into understanding stacks or the whole v3 Compose spec.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Generic Monk posted:

Is there a guide on how to set this up? I’ve wanted to implement something like this for a while but didn’t know where to start

Depends on how far down the rabbit hole you want to go! If you want to go all the way, and pick up some marketable skills along the way:

1. Manually provision a VPC & EC2 instance in AWS to use as your test machine
2. Write some ansible code that will install docker, download your docker-compose.yml, and "docker compose up" on said EC2 instance
3. Replace your manually provisioned VPC & EC2 instance with infrastructure provisioned via terraform
4. Create a gitlab CI/CD pipeline that will run terraform to provision your test infrastructure, then runs ansible to configure the EC2 instance, then optionally destroys it at the end

There's a TON of devils in the details for how to handle secrets, AWS credentials, SSH keys, firewall rules (security groups), etc

I don't think I've seen a guide specifically for this setup, but there's guides for the different chunks:
* https://developer.hashicorp.com/terraform/tutorials/aws-get-started/aws-build
* https://www.digitalocean.com/community/tutorials/how-to-use-ansible-to-automate-initial-server-setup-on-ubuntu-22-04

You could potentially simplify it a bit by manually configuring your server so that it can run "git pull" to grab your latest code, then write an ansible task that will do the "git pull" and "docker compose up". Run the ansible task from your local machine as you go, to verify it can execute your ansible tasks on whatever machine is your "server". Then figure out how to write a CI/CD pipeline that can do the same thing you are running locally. From there you can layer on more and more complexity.
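That simplified "git pull then compose up" ansible task could look something like this (host group, repo URL, and paths are all placeholders, not fletcher's actual setup):

```yaml
# deploy.yml — pull the repo on the server, then bring the stack up
- hosts: homelab
  become: true
  tasks:
    - name: Pull latest code
      ansible.builtin.git:
        repo: "git@example.com:me/homelab.git"
        dest: /opt/homelab
        version: main

    - name: Bring the stack up
      community.docker.docker_compose_v2:
        project_src: /opt/homelab
        state: present
```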

I've been meaning to write a blog post to go over my setup in more detail, I never seem to find the time though. Too much time consumed with more tinkering.

cruft
Oct 25, 2007

fletcher posted:

1. Manually provision a VPC & EC2 instance in AWS to use as your test machine
2. Write some ansible code that will install docker, download your docker-compose.yml, and "docker compose up" on said EC2 instance

If all you want is Docker, there are several operating systems that let you skip step 2. Some of them even apply their own updates, so you don't ever have to think about that.

Warbird
May 23, 2012

America's Favorite Dumbass

Dockge is apparently adding support for remote hosts and agents, similar to how Portainer does, which pretty much removes the last reason I had to use the drat thing. I need to see if they have built-in cron stuff. Portainer did iirc, but you had to do some weird nonsense.

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.

cruft posted:

If all you want is Docker, there are several operating systems that let you skip step 2. Some of them even apply their own updates, so you don't ever have to think about that.

I've just been using Ubuntu in a VM as my docker host, though I am looking at replacing it. Do you recommend any of those docker-orientated operating systems?

cruft
Oct 25, 2007

hogofwar posted:

I've just been using Ubuntu in a VM as my docker host, though I am looking at replacing it. Do you recommend any of those docker-orientated operating systems?

I'm a big fan of Flatcar Container Linux, which is what I use everywhere*. It uses an A/B boot partition like ChromeOS.

Red Hat people may like Fedora CoreOS better. The whole SilverBlue thing confuses me and I haven't needed to sit down and figure it out yet, but Red Hat people might already be comfortable with it.

* My homelab is a Raspberry Pi and Flatcar is all "don't run this on an RPi if you care about the services", so there I use rootless alpine, which I like a lot.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



I'm a big ostree fan. It's what Silverblue, CoreOS, and, interestingly, Flatpak use.

It basically works like a base image for a podman or docker container, over which changes are layered like commits to a git repo. You can revert any changes from packages you install. Most of / is read-only, with the main exceptions of /var and /etc. /home is symlinked into /var. If you make changes to any of the settings files in /etc, you can run a command to see what is different and revert them if need be.

There's a preferred order for installing stuff if you want to get the most out of it:
1: Flatpaks
2: Containers (either purpose-built containers, or distrobox to create traditional distro-like environments with access to your home)
3: Overlays using rpm-ostree

You needn't be afraid to use overlays, but keep in mind that updates take longer the more overlays you have. Though, since updates are prepared while you're still booted and you boot into the prepared image instantly on your next boot, it should have no impact on boot speed.

Ostree is substantially more space-efficient as an immutable system than an A/B one, since it only needs to store the diffs between the current version and the previous one (+ any other pinned deployments you set). There's barely any additional space required to keep a couple of known-working deployments around to revert to if needed.

El Mero Mero
Oct 13, 2001

THF13 posted:


Portainer is fine, and I meant that. One example of where it felt clunky to me was updating a running container. The portainer process for this wasn't hard, but it was too many steps for something I wanted to do with a single click in a GUI. Portainer is stop the container, go into the container, click recreate, enable re-pull image, click recreate again.


Why not just run watchtower and automate that process?
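For anyone following along, Watchtower is itself just another container that watches the Docker socket; a minimal deployment sketch (the schedule value is illustrative):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_SCHEDULE=0 0 4 * * *   # check for new images at 4am daily
    restart: unless-stopped
```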

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

hogofwar posted:

I've just been using Ubuntu in a VM as my docker host, though I am looking at replacing it. Do you recommend any of those docker-orientated operating systems?

I haven't looked into any of the docker-oriented operating systems, I just use Debian. I like the longevity of it and that it's probably a safe bet for use well into the future

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Barebones Ubuntu or Debian is a solid choice; it's what I use personally, and it's incredibly stable. If you're that concerned about overhead, maybe Alpine.

It's probably just that I have really bad luck, but every time I've tried to use a container-focused OS, it's been Google Reader-ed within a year.

CoreOS :rip:
RancherOS :rip:
K3os :rip:

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



CoreOS is still around.

And Flatcar is a fork of the original CoreOS from before it was snatched up by Red Hat, and it's still being maintained

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Nitrousoxide posted:

CoreOS is still around.

Oh drat! That's great to hear. I just remember seeing the news a few years ago that the original had gone EOL and thought they were trying to absorb everything into Atomic Host.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Does there exist a Nextcloud solution that replicates a 'Vault' such as Dropbox and OneDrive utilize? By this I mean a folder that when you go to access it requires a pin/passcode to view the contents, which are encrypted normally. I would like to store some sensitive documents (Epstein's list, for those curious) that would make it much more difficult to access in the rare event of an intrusion/hack.

Basic googling isn't helping, so appreciate any nudges in the right direction. Feels like a problem/feature that would have been incorporated somehow already either natively or through an add-on.

Thanks in advance!

cruft
Oct 25, 2007

Cenodoxus posted:

Oh drat! That's great to hear. I just remember seeing the news a few years ago that the original had gone EOL and thought they were trying to absorb everything into Atomic Host.

It's different. The name is the same, but the stuff beneath Fedora CoreOS--or, as my newest employee began calling it, "fecos" (like, fecal OS)--is very, very different.

Flatcar is the rightful heir of CoreOS. I think SuSE bought it? Maybe Microsoft?

---

If anybody is interested, I can write up what it's been like using Alpine as a container OS. The short version is: I like it a lot.

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.

cruft posted:

I'm a big fan of Flatcar Container Linux, which is what I use everywhere*. It uses an A/B boot partition like ChromeOS.

Red Hat people may like Fedora CoreOS better. The whole SilverBlue thing confuses me and I haven't needed to sit down and figure it out yet, but Red Hat people might already be comfortable with it.

* My homelab is a Raspberry Pi and Flatcar is all "don't run this on an RPi if you care about the services", so there I use rootless alpine, which I like a lot.

One thing I'm unsure of is that for my containers I currently use bind mounts to local folders in my docker vm. I would just have these nfs mounted but I have had issues with sqlite or similar over nfs. Would this be possible to do in these operating systems?

I know there's docker volumes, but it doesn't have the same ease of accessing the files (such as changing config). Though I'm happy to be proven wrong, as I'm not too familiar with them.
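For comparison, both styles side by side in one compose file (service name and paths are placeholders); bind mounts work the same on the container-focused distros, since the host path just needs to exist:

```yaml
services:
  app:
    image: nginx:alpine          # placeholder service
    volumes:
      - ./config:/etc/app        # bind mount: plain folder next to the compose file, easy to edit
      - app_data:/var/lib/app    # named volume: managed by docker, location abstracted away

volumes:
  app_data:
```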

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

El Mero Mero posted:

Why not just run watchtower and automate it that process?
I'm not actually auto-updating any docker containers, just doing updates somewhat arbitrarily or when things break.
I try to use either the official developer or some bigger group like linuxserver if that's an option, but some containers are maintained by random internet people. How well did "adolfintel", who maintains a speedtest container, secure his dockerhub account? Or "alexta69", who made a yt-dlp frontend webui container?

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

cruft posted:

Flatcar is the rightful heir of CoreOS. I think SuSE bought it? Maybe Microsoft?

Flatcar was bought by Microsoft. I used it at my old job and I was very happy with it. Stable, minimal, excellent documentation, and it felt very well thought out for people running serious money on it.

For example, here are the docs on reboot strategies:

https://www.flatcar.org/docs/latest/setup/releases/update-strategies/

It's definitely aimed at production enterprise cloud usage, but I can't think of any reason it wouldn't make a good OS image for a hobbyist VM / container host. We ran it on a tiny scale, like a couple dozen VMs at peak load, and it never got in our way.

cruft
Oct 25, 2007

hogofwar posted:

One thing I'm unsure of is that for my containers I currently use bind mounts to local folders in my docker vm. I would just have these nfs mounted but I have had issues with sqlite or similar over nfs. Would this be possible to do in these operating systems?

I know there's docker volumes, but it doesn't have the same ease of accessing the files (such as changing config). Though I'm happy to be proven wrong, as I'm not too familiar with them.

I don't even understand what you're asking, heh. If you want to know if you can bind mount a volume in a container OS, the answer is yes. You can also NFS mount things, although database over NFS is a first class ticket to slowsville, as you discovered.

We even had an NFS server running flatcar. And HA MariaDB and Postgres database pools.

cruft
Oct 25, 2007

I actually came on to brag about the IRC server I'm running on my raspberry pi.

It felt risky doing this on a tiny PC with my 20Mbps uplink, until I remembered running a 1000+ user server on a T1 in 2001. The RPi outclasses that in every way, even with all my other crap running.

Warbird
May 23, 2012

America's Favorite Dumbass

Speaking of, when did it become kosher to run databases in containers anyway? When I first started out, common wisdom was that it was a terrible idea, but that seems to have changed since then. I'd presume so long as you're volume-mounting the data it would largely be fine.
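The volume-mount pattern described above, sketched as a compose file (names and the password are obviously placeholders):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example           # use a secret in practice
    volumes:
      - db_data:/var/lib/postgresql/data   # data survives container recreation

volumes:
  db_data:
```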


cruft
Oct 25, 2007

Warbird posted:

Speaking of, when did it become kosher to run databases in containers anyway? When I first started out common wisdom was that it was a terrible idea but that seems to have changed from then. I’d presume so long as you’re volume mounting the data it would largely be fine.

I feel like that was the main thing. Databases want to do weird tricks with files, and adding filesystem abstraction is problematic.
