Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Next week I'm planning on re-doing my home server that is stuffed full of 24 drives in ZFS pools. Here's what I'm thinking of doing, tell me if I'm dumb or not.

1. Ubuntu Server that doesn't do anything but run KVM.
2. A KVM-managed file server VM managing all my pools.
3. Another VM running Postgres and MySQL, which I need for development and for some apps I run on the server.
4. Another VM that manages a half dozen docker-ified apps.

I'm not sure if it's worth it, but it feels like a nice idea to not run anything on the host OS but KVM and then delegate everything I use to VMs. Maybe it's a better idea to just run Docker on the host OS?


apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
On a smaller scale than you're doing (well, as far as storage capacity is concerned) I have 1 host with 3 guests hanging off of it.

The host is CentOS 7 and the guests are currently 2xCentOS and 1xFedora server.

My storage is a mirror ZFS pair that's attached to the host and shared out to the rest via samba and NFS.

The host OS (and subsequent qcow2 storage for guests) is all on a Samsung 850 Evo.

So all my system drives are on the SSD and all of my data is on the mirrored ZFS pair of spinnies. I used the same ethos as you regarding the host: it's there to run ZFS and KVM for the guests and do nothing else. I have spare room on the SSD to fire up another VM if I want to add another system. I can use a qcow2 from as little as 10GB to whatever size and provision some data storage via NFS/samba from my ZFS pool. I've also got flexibility with the ZFS pool, in that I can create another dataset on it as long as it's not full and use that if I need to further segregate my data.
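As a sketch, provisioning a new guest in this setup is a couple of commands (the dataset and image names here are made up for illustration):

```
$ # new dataset on the pool, shared over NFS for the guest's data
$ sudo zfs create pool0/newguest
$ sudo zfs set sharenfs=on pool0/newguest
$ # small qcow2 system disk on the SSD; thin-provisioned, grows up to 10G
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/newguest.qcow2 10G
```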

It's working OK so far. The main thing I'm doing differently from what you propose is that I have my ZFS storage attached to my host, not one of the VMs.

SamDabbers
May 26, 2003



Thermopyle posted:

1. Ubuntu Server that doesn't do anything but run KVM.
2. A KVM-managed file server VM managing all my pools.
3. Another VM running Postgres and MySQL, which I need for development and for some apps I run on the server.
4. Another VM that manages a half dozen docker-ified apps.

I'm not sure if it's worth it, but it feels like a nice idea to not run anything on the host OS but KVM and then delegate everything I use to VMs. Maybe it's a better idea to just run Docker on the host OS?

Host OS as a minimal hypervisor is not a bad idea at all. Are your service instances also going to be running a flavor of Linux? An alternative could be to use LXD on the host instead of full-fat KVM and run your services within those. You can even run Docker inside an LXD instance.

I've been running a similar setup with FreeBSD jails for years. My host OS has no services running except NTP and SSH, and everything else has its own jail, with appropriate portions of the filesystem bind/nullfs mounted in the right places, and network segmentation between jails is enforced by the host firewall. Performance is excellent, and I don't have to micromanage static resource assignments for each container, though I have the option.
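If you go the LXD route, the Docker-inside-LXD bit is one config knob. A sketch, with a made-up instance name:

```
$ lxc launch ubuntu:16.04 apps
$ lxc config set apps security.nesting true
$ lxc restart apps
$ # now install docker inside "apps" and run containers as usual
```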

SamDabbers fucked around with this message at 20:02 on Jul 7, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Further to the VM gaming related chat, how are people activating Windows on their virtual gaming platforms? Particularly 10, and particularly where you can pay for a license but be able to use it over and over in case you need to tear down the VM and start again. Retail license?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

SamDabbers posted:

Host OS as a minimal hypervisor is not a bad idea at all. Are your service instances also going to be running a flavor of Linux? An alternative could be to use LXD on the host instead of full-fat KVM and run your services within those. You can even run Docker inside an LXD instance.

I've been running a similar setup with FreeBSD jails for years. My host OS has no services running except NTP and SSH, and everything else has its own jail, with appropriate portions of the filesystem bind/nullfs mounted in the right places, and network segmentation between jails is enforced by the host firewall. Performance is excellent, and I don't have to micromanage static resource assignments for each container, though I have the option.

Hmm, all guests are Linux except for one which is Windows XP(!). I guess I could do LXD for everything except the XP guest which I could leave on KVM.

SamDabbers
May 26, 2003



Thermopyle posted:

Hmm, all guests are Linux except for one which is Windows XP(!). I guess I could do LXD for everything except the XP guest which I could leave on KVM.

You can certainly run both LXD and KVM on the same host. LXD just has less overhead and more flexibility for Linux instances than running a separate kernel in KVM. Out of curiosity, what do you need an XP guest for in 2017?

apropos man posted:

Further to the VM gaming related chat, how are people activating Windows on their virtual gaming platforms? Particularly 10, and particularly where you can pay for a license but be able to use it over and over in case you need to tear down the VM and start again. Retail license?

Windows 10 seems to tie activation to the system UUID, which is configurable. Just make it the same value on your new VM. The QEMU command line option is -uuid, and if you use a libvirt-based management tool, you can use virsh to change the VM's uuid in XML.

code:
$ sudo virsh dumpxml win10 > win10.xml
$ vi win10.xml
<domain type='kvm' id='1'>
  <name>win10</name>
  <uuid>7b4f842d-5fc0-4769-9000-08ba6ffaea7e</uuid>
...
$ sudo virsh undefine win10
$ sudo virsh define win10.xml
You can also keep the VM definition in libvirt and just create a fresh virtual disk if you simply want to wipe and reinstall.

SamDabbers fucked around with this message at 21:50 on Jul 7, 2017

other people
Jun 27, 2004
Associate Christ

SamDabbers posted:

You can certainly run both LXD and KVM on the same host. Out of curiosity, what do you need an XP guest for in 2017?


Windows 10 seems to tie activation to the system UUID, which is configurable. Just make it the same value on your new VM. The QEMU command line option is -uuid, and if you use a libvirt-based management tool, you can use virsh to change the VM's uuid in XML.

code:
$ sudo virsh dumpxml win10 > win10.xml
$ vi win10.xml
<domain type='kvm' id='1'>
  <name>win10</name>
  <uuid>7b4f842d-5fc0-4769-9000-08ba6ffaea7e</uuid>
...
$ sudo virsh undefine win10
$ sudo virsh define win10.xml
Of course, if you're just wiping and reinstalling, you can keep the VM definition in libvirt and recreate a fresh virtual disk.

maybe you did it that way to more easily Show Your Work but you don't have to undefine/redefine. just 'virsh edit' that thing.

SamDabbers
May 26, 2003



other people posted:

maybe you did it that way to more easily Show Your Work but you don't have to undefine/redefine. just 'virsh edit' that thing.

Actually, I had to undefine/redefine in order to change the UUID in particular. I got an error when I tried to change it using virsh edit.
code:
$ sudo virsh edit win10
error: operation failed: domain 'win10' already exists with uuid 7b4f842d-5fc0-4769-9000-08ba6ffaea7e
Failed. Try again? [y,n,i,f,?]: 

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

SamDabbers posted:

You can certainly run both LXD and KVM on the same host. LXD just has less overhead and more flexibility for Linux instances than running a separate kernel in KVM. Out of curiosity, what do you need an XP guest for in 2017?

A custom 16-bit line of business application developed in 1993 that my wife has to use for her work. The latest version of Windows I can get it to run on is XP.

other people
Jun 27, 2004
Associate Christ

SamDabbers posted:

Actually, I had to undefine/redefine in order to change the UUID in particular. I got an error when I tried to change it using virsh edit.
code:
$ sudo virsh edit win10
error: operation failed: domain 'win10' already exists with uuid 7b4f842d-5fc0-4769-9000-08ba6ffaea7e
Failed. Try again? [y,n,i,f,?]: 

ah well see maybe i am an idiot then

my phone doesn't have virsh so i can't test this right now

edit: the man pages and libvirt.org make no mention of not being able to change a domain's uuid but maybe there is some limitation of the xml validator for this....

other people fucked around with this message at 22:13 on Jul 7, 2017

Kassad
Nov 12, 2005

It's about time.

apropos man posted:

Further to the VM gaming related chat, how are people activating Windows on their virtual gaming platforms? Particularly 10, and particularly where you can pay for a license but be able to use it over and over in case you need to tear down the VM and start again. Retail license?

You don't need to activate Windows 10 to install it, by the way. The only differences are that you can't use any personalization options and there's a persistent watermark reminding you to activate. I'm not using the VM passthrough method (yet?) but since last Christmas I've been dual booting with a Windows 10 ISO I grabbed from microsoft.com. I'm still getting updates, which is all I care about for a system I'm only using for games.

ToxicFrog
Apr 26, 2008


Thermopyle posted:

A custom 16-bit line of business application developed in 1993 that my wife has to use for her work. The latest version of Windows I can get it to run on is XP.

I wonder if it would run in dosbox, possibly inside win3.1.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

SamDabbers posted:

Actually, I had to undefine/redefine in order to change the UUID in particular. I got an error when I tried to change it using virsh edit.
code:
$ sudo virsh edit win10
error: operation failed: domain 'win10' already exists with uuid 7b4f842d-5fc0-4769-9000-08ba6ffaea7e
Failed. Try again? [y,n,i,f,?]: 

Yeah. That XML check thing was annoying the poo poo out of me the other day in Ubuntu until I managed to somehow get 'virsh edit' to ignore the template. A trick I then failed to replicate when I tore everything down and switched to Manjaro. The annoying thing is that sometimes the ignore key doesn't work. If you're going to give me the option to gently caress this up, at least be consistent about it.

I'll try activating Windows and then keep the UUID safe for when I inevitably reinstall something.

evol262
Nov 30, 2010
#!/usr/bin/perl

SamDabbers posted:

Windows 10 seems to tie activation to the system UUID, which is configurable. Just make it the same value on your new VM. The QEMU command line option is -uuid, and if you use a libvirt-based management tool, you can use virsh to change the VM's uuid in XML.

Note that in the case of OEM systems, this may actually be tied in an annoying efivar instead of a uuid, but you can also extract that.

Thermopyle posted:

Next week I'm planning on re-doing my home server that is stuffed full of 24 drives in ZFS pools. Here's what I'm thinking of doing, tell me if I'm dumb or not.

1. Ubuntu Server that doesn't do anything but run KVM.
2. A KVM-managed file server VM managing all my pools.
3. Another VM running Postgres and MySQL, which I need for development and for some apps I run on the server.
4. Another VM that manages a half dozen docker-ified apps.

I'm not sure if it's worth it, but it feels like a nice idea to not run anything on the host OS but KVM and then delegate everything I use to VMs. Maybe it's a better idea to just run Docker on the host OS?

If it were me, I'd manage the file storage on the host, and shove postgres/mysql and the docker containers into a coreos host with persistent storage using kubernetes to manage it all.

I'd avoid LXD, not because it's bad, but because it's a Canonical project which is inevitably going to become less and less supported until it dies.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

evol262 posted:

If it were me, I'd manage the file storage on the host, and shove postgres/mysql and the docker containers into a coreos host with persistent storage using kubernetes to manage it all.

I'm barely familiar with docker and the extent of my knowledge about kubernetes is from reading the past 10 minutes, and I still can't figure out what coreos is. Their home page is remarkably unhelpful!

So, I'm guessing what you're saying here is that I leave VMs out of it and kubernetes runs my containers. I take it kubernetes can just manage containers on the same host it is running on? I was originally under the impression that kubernetes would push containers to other machines (or VMs), but I really don't know what I'm talking about and I'm not sure where coreos comes in.

SamDabbers
May 26, 2003



evol262 posted:

I'd avoid LXD, not because it's bad, but because it's a Canonical project which is inevitably going to become less and less supported until it dies.

For a home server it's not really much of a risk though.

It's really too bad Linux container folk didn't implement kernel namespaces as a security boundary like FreeBSD's jails or Illumos' zones. The flexibility of being able to make a network namespace independently of a process namespace or a file system namespace is cool, but it complicates proper isolation to the point where a separate tool like LXD is necessary to lock everything down. Abstracting all the subsystems as a unit and baking the boundary into the implementation just makes everything simpler to manage and harder to gently caress up. For example, Docker containers wouldn't be nearly so easy to exploit.

Thermopyle posted:

I'm barely familiar with docker and the extent of my knowledge about kubernetes is from reading the past 10 minutes, and I still can't figure out what coreos is.

CoreOS aka Container Linux is just a stripped down distro for running Docker containers. He's saying run it in a KVM VM and your Docker apps in it. Kubernetes is a slick automation tool for deploying and managing apps that run in Docker containers.

SamDabbers fucked around with this message at 04:16 on Jul 8, 2017

Furism
Feb 21, 2006

Live long and headbang
How do you get notifications on ZFS errors/warning when you use a stock Linux OS?

evol262
Nov 30, 2010
#!/usr/bin/perl

SamDabbers posted:

It's really too bad Linux container folk didn't implement kernel namespaces as a security boundary like FreeBSD's jails or Illumos' zones. The flexibility of being able to make a network namespace independently of a process namespace or a file system namespace is cool, but it complicates proper isolation to the point where a separate tool like LXD is necessary to lock everything down. Abstracting all the subsystems as a unit and baking the boundary into the implementation just makes everything simpler to manage and harder to gently caress up. For example, Docker containers wouldn't be nearly so easy to exploit.
That's what the audit subsystem is for, and docker-selinux works well.

cgroups and namespacing were kind of an afterthought. The real problem with LXD is that using hardware virt to segment works, but it's yet another methodology (which makes sense given that Canonical has very few engineers) instead of really locking down the attack surface in the kernel.

Plus, Linux isn't nearly as unified as BSD or Solaris. I love it and it works, but it's 200 independent projects stuck together, not one. So Docker getting brand awareness (which is mostly what their valuation is based on) means that anything based on rkt, containerd, the OCI, or another solution barely gets a glance.

Thermopyle posted:

I'm barely familiar with docker and the extent of my knowledge about kubernetes is from reading the past 10 minutes, and I still can't figure out what coreos is. Their home page is remarkably unhelpful!

So, I'm guessing what you're saying here is that I leave VMs out of it and kubernetes runs my containers. I take it kubernetes can just manage containers on the same host it is running on? I was originally under the impression that kubernetes would push containers to other machines (or VMs), but I really don't know what I'm talking about and I'm not sure where coreos comes in.

CoreOS is basically systemd+docker+cloud-init and nothing else, in an A/B update model instead of packages. They have a good "kubernetes for humans" setup.

Kubernetes itself is etcd, skydns, flannel, and a couple of other pieces to turn "run this dockerfile on one host and map ports" into something called "pods": basically discrete groups of container(s) which can spread across multiple hosts, automatically scale, and run in groups (like docker-swarm, in principle), while it manages network mapping, container visibility, storage mapping, etc. It's to containers as openstack, vcloud, AWS, or whatever is to VMs.
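A pod in manifest form is just a few lines of YAML. A minimal sketch with made-up names, which `kubectl create -f` would schedule onto whichever host has room:

```
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```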

It's what most of the large container hosting (Azure, GCE, etc) runs on.

CoreOS's kubernetes setup is dead simple. Or Openshift on top of a CentOS VM, which can be done in a one-shot setup from ansible or in a container, and which also handles Jenkins workflows for building your containers from source and deploying them, mapping visible DNS names (with a wildcard for the host).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If I want to spin iSCSI and SRP targets, what do I go for? LIO or SCST? What's better maintained? Or is there something else?

--edit: I suppose LIO, since it's apparently in the kernel. Various pages on the web were fuzzy about its state. I guess that's answered.

Combat Pretzel fucked around with this message at 23:39 on Jul 10, 2017

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Combat Pretzel posted:

If I want to spin iSCSI and SRP targets, what do I go for? LIO or SCST? What's better maintained? Or is there something else?

--edit: I suppose LIO since it's apparently in the kernel. Various pages on the web were fuzzy about it's state. I guess that's answered.
Speaking pragmatically: SCST if you need to generate complete config files from a script without a ton of bullshit; LIO if you're comfortable using its wack rear end command line to do your administration, because it performs a lot better.

ToxicFrog
Apr 26, 2008


Furism posted:

How do you get notifications on ZFS errors/warning when you use a stock Linux OS?

Write a cron (or timer unit these days) that runs `zpool status -x | fgrep -v 'all pools are healthy'` set to mail on any output?
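If anyone wants that as a script: a minimal sketch of the same idea, where the mail command, subject, and recipient are placeholders for whatever your box actually uses.

```shell
#!/bin/sh
# Pass stdin through unless it's the all-clear line from `zpool status -x`,
# so the pipeline only produces output (and thus mail) when something is wrong.
zfs_problems() {
    grep -v '^all pools are healthy$'
}

# Real invocation, commented out here since it needs zpool and a configured
# mailer (names below are examples, not a recommendation):
# zpool status -x | zfs_problems | mail -s "ZFS alert on $(hostname)" root
```

Drop it in cron or hang it off a timer unit.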

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
How much wear and tear does a scrub put on your drives?

Is once a fortnight too much or too little for `zpool scrub pool0`, where pool0 is just a mirror pair of WD Reds?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

apropos man posted:

How much wear and tear does a scrub put on your drives?

Is once a fortnight too much or too little for `zpool scrub pool0`, where pool0 is just a mirror pair of WD Reds?
ZFS scrubs aren't sequential. They will put some more wear and tear on your drive armatures than a traditional sequential RAID pass, but hard drives are actually designed to read and write bytes on disk pretty frequently. They'll be fine.

Scrub frequency depends a lot on your workload patterns. For example, if you do regular full backups, you don't ever need to scrub, because ZFS detects (and, when it can, automatically fixes) bitrot transparently when you read. I've never found any benefit to scrubbing more than once every few months, though. Two days sounds really extreme even for paranoid people.

hifi
Jul 25, 2012

There's a whole lot of well-meaning enterprise-grade or even just plain wrong information on the internet about ZFS (e.g., the 1GB of RAM per TB of storage "rule of thumb"). I saw a recommendation of weekly scrubs and to be honest it's really not needed. You have SMART hard drive health warning you about physical issues with the drives, checksum parity, and regular RAID disk parity, so two weeks is a really short time for something to go wrong with your setup. I do once a month and I think somewhere between that and yearly is what you should shoot for. I ran a regular mdadm array for a while before I felt Linux ZFS was far enough along, and I never had an error that only a ZFS scrub could fix.

Vulture Culture posted:

ZFS scrubs aren't sequential. They will put some more wear and tear on your drive armatures than a traditional sequential RAID pass, but hard drives are actually designed to read and write bytes on disk pretty frequently. They'll be fine.

Scrub frequency depends a lot on your workload patterns. For example, if you do regular full backups, you don't ever need to scrub, because ZFS detects (and, when it can, automatically fixes) bitrot transparently when you read. I've never found any benefit to scrubbing more than once every few months, though. Two days sounds really extreme even for paranoid people.

Agreed, but a fortnight is 2 weeks

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Thanks chaps. I've added a systemd timer to scrub on a Friday, somewhere in the middle of every month:

code:
[Unit]
Description=initiate a zpool scrub pool0 on a Friday in the middle of every month.

[Timer]
OnCalendar=Fri *-*-13,14,15,16,17,18,19 02:00:00
Unit=scrub_zpool.service
Persistent=false

[Install]
WantedBy=timers.target
I also have timers set to do a long S.M.A.R.T. test on each drive in the mirror on the first Saturday and first Sunday of the month, respectively.
I've also got email reports via postfix/gmail for both the S.M.A.R.T. tests and the scrub.

I think this is probably more than enough for a home file server, so I can be confident that if anything untoward starts happening on the physical level or data integrity level I'm gonna get prior notice. It won't stop a drive from suddenly going kaput, of course, but between the two of them and scheduled backups I'll be alright.
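The timer above needs a matching scrub_zpool.service; a minimal one looks something like this (the zpool path is distro-dependent, so check yours with `which zpool`):

```
[Unit]
Description=Scrub pool0

[Service]
Type=oneshot
ExecStart=/usr/bin/zpool scrub pool0
```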

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

hifi posted:

Agreed, but a fortnight is 2 weeks
I don't understand what units are or why people use them. Specify milliseconds and let the system sort 'em out.

In all seriousness, I'm probably the exact kind of person that crashed that orbiter into Mars.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I just wish that they'd fix the formatting when specifying a range of dates. You're supposed to be able to use either this way:

OnCalendar=Fri *-*-13,14,15,16,17,18,19 02:00:00

Or this way:
OnCalendar=Fri *-*-13-19 02:00:00

But last time I tried (for about 4 hours), the short way didn't work. I can't remember if it was in Ubuntu or CentOS at the time. I've never filed a bug report before, so I just left it. I had a thorough crack at getting the shortened version working and there's definitely something wrong, either in the code or the systemd manpage.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin

hifi posted:

Agreed, but a fortnight is 2 weeks

You guys and your crazy metric system measurements of time.

Polygynous
Dec 13, 2006
welp

apropos man posted:

I just wish that they'd fix the formatting when specifying a range of dates. You're supposed to be able to use either this way:

OnCalendar=Fri *-*-13,14,15,16,17,18,19 02:00:00

Or this way:
OnCalendar=Fri *-*-13-19 02:00:00

But last time I tried (for about 4 hours), the short way didn't work. I can't remember if it was in Ubuntu or CentOS at the time. I've never filed a bug report before, so I just left it. I had a thorough crack at getting the shortened version working and there's definitely something wrong, either in the code or the systemd manpage.

The manpage here says: Each component may also contain a range of values separated by "..".

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Polygynous posted:

The manpage here says: Each component may also contain a range of values separated by "..".

gently caress. I posted from my mobile at work. That's what I meant. I'd been trying with the double period, as stated in the manpage. I'd just misremembered.
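For anyone else who hits this: newer systemd can sanity-check a calendar expression without waiting for the timer to fire (the `calendar` subcommand isn't in older releases, so this may not exist on your box):

```
$ systemd-analyze calendar "Fri *-*-13..19 02:00:00"
```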

Polygynous
Dec 13, 2006
welp
oh. Then I have no idea. :v:

KingEup
Nov 18, 2004
I am a REAL ADDICT
(to threadshitting)


Please ask me for my google inspired wisdom on shit I know nothing about. Actually, you don't even have to ask.
Is anyone here with Linux driving a 4k display with an nvidia card?

I'm very interested in the support for nearest neighbour scaling as this would make gaming on a laptop with a 4k display far more attractive to me.

evol262
Nov 30, 2010
#!/usr/bin/perl

KingEup posted:

Is anyone here with Linux driving a 4k display with an nvidia card?

I'm very interested in the support for nearest neighbour scaling as this would make gaming on a laptop with a 4k display far more attractive to me.

Yes, though I basically don't play games, and I don't think DSR is implemented on Linux yet (any games I do play are done with a passthrough GPU anyway)

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
I wanted to set some firewall rules using firewalld to allow inbound SNMP traffic, but only from a /32. You can't just do sudo firewall-cmd --zone=public --add-port=161/tcp --add-source=10.200.204.200/32 --permanent, because you can't add both a source IP and a source port. So instead, I made a new zone, added the source IP and port to it, and then added the snmp service to it too. With iptables I would have to chain this into an ACCEPT rule, so I started to look for how to do that, but then I noticed SNMP traffic was flowing. Am I good?
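For reference, the new-zone approach in command form. A sketch with a made-up zone name (note that firewalld's built-in snmp service is 161/udp; if you really need 161/tcp, add the port to the zone instead):

```
$ sudo firewall-cmd --permanent --new-zone=snmp-mgmt
$ sudo firewall-cmd --permanent --zone=snmp-mgmt --add-source=10.200.204.200/32
$ sudo firewall-cmd --permanent --zone=snmp-mgmt --add-service=snmp
$ sudo firewall-cmd --reload
```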

Furism
Feb 21, 2006

Live long and headbang

anthonypants posted:

you can't add both a source IP and a source port

Is this true? If so, why? Sounds really bad. Any firewall can do that!

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Furism posted:

Is this true? If so, why? Sounds really bad. Any firewall can do that!

quote:

$ sudo firewall-cmd --zone=public --add-port=161/tcp --add-source=1.2.3.4/32 --permanent
usage: see firewall-cmd man page
firewall-cmd: error: argument --add-source: not allowed with argument --add-port

Furism
Feb 21, 2006

Live long and headbang
Oh I believe you, I just want to understand why this limitation exists when it's a basic feature in any network firewall.

RFC2324
Jun 7, 2012

http 418


quote:

To allow incoming SSH connections from a specific IP address or subnet, specify the source. For example, if you want to allow the entire 15.15.15.0/24 subnet, run these commands:

sudo iptables -A INPUT -p tcp -s 15.15.15.0/24 --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

The second command, which allows the outgoing traffic of established SSH connections, is only necessary if the OUTPUT policy is not set to ACCEPT.


https://www.digitalocean.com/community/tutorials/iptables-essentials-common-firewall-rules-and-commands

First Google hit.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Yeah, I know how it works in iptables. I'm asking about firewalld though.


RFC2324
Jun 7, 2012

http 418

anthonypants posted:

Yeah, I know how it works in iptables. I'm asking about firewalld though.

Oops, misread that as iptables.
