SurgicalOntologist
Jun 17, 2004

There are two weird things going on in my Fedora + Gnome system. First, when it goes to sleep it wakes up again a few seconds later. There are two monitors, so it's this cycle where one monitor sleeps then the other, then one wakes up with the login screen, then the lock screen moves to the middle of the two monitors and the other wakes up, then the lock screen moves back to just one of the monitors. It's super annoying when I'm meeting someone in my office and the monitors are just flashing on and off. However, it apparently does sleep eventually because when I come into the office in the morning it's not flashing anymore. I've never stuck around long enough to see how this happens though.

Second, Gnome gets really slow after it's been running for a few hours. Disabling all my extensions doesn't help, but restarting the shell fixes it. gnome-system-monitor shows 10-20% CPU for gnome-shell. Any ideas?

kujeger
Feb 19, 2004

OH YES HA HA

SurgicalOntologist posted:

There are two weird things going on in my Fedora + Gnome system. First, when it goes to sleep it wakes up again a few seconds later. There are two monitors, so it's this cycle where one monitor sleeps then the other, then one wakes up with the login screen, then the lock screen moves to the middle of the two monitors and the other wakes up, then the lock screen moves back to just one of the monitors. It's super annoying when I'm meeting someone in my office and the monitors are just flashing on and off. However, it apparently does sleep eventually because when I come into the office in the morning it's not flashing anymore. I've never stuck around long enough to see how this happens though.

I've had almost the same thing -- but only on one particular brand, if I used another monitor it worked fine. Switching from DVI to DP also fixed it on the same monitor, so you could try changing that.

YouTuber
Jul 31, 2004

by FactsAreUseless

SurgicalOntologist posted:

There are two weird things going on in my Fedora + Gnome system. First, when it goes to sleep it wakes up again a few seconds later. There are two monitors, so it's this cycle where one monitor sleeps then the other, then one wakes up with the login screen, then the lock screen moves to the middle of the two monitors and the other wakes up, then the lock screen moves back to just one of the monitors. It's super annoying when I'm meeting someone in my office and the monitors are just flashing on and off. However, it apparently does sleep eventually because when I come into the office in the morning it's not flashing anymore. I've never stuck around long enough to see how this happens though.

Second, Gnome gets really slow after it's been running for a few hours. Disabling all my extensions doesn't help, but restarting the shell fixes it. gnome-system-monitor shows 10-20% CPU for gnome-shell. Any ideas?

I get this under Arch, so it's not distro-specific. My best guess is that it's forgetting that a monitor exists and shifts everything to the left or right since it's a single-monitor setup now. Doing that makes it freak the gently caress out and unsuspend, but when it's unsuspending it recognizes the prior monitor once again. I just disabled suspend; since it's a Chromebox it uses so little electricity, and it's fanless as well, so when I'm done I just turn the monitors off by hand.

I also get the other problem but only after a few days.

evol262
Nov 30, 2010
#!/usr/bin/perl
Does the monitor have its own power saving somewhere in the menu? I have a Samsung monitor that does this, and disabling the built-in power saving fixed it.

Filthy Monkey
Jun 25, 2007

It has been years since I've done anything Linux. I have a Raspberry Pi 3 and would like to be able to read/write to the Pi's home directory over Samba. I've followed this guide:
http://www.daedtech.com/create-a-windows-share-on-your-raspberry-pi/

My smb.conf looks exactly like what is shown in the link.

I can see the share and open files just fine. I don't seem to have permission to write anything, though. I would like to fix that, as the main reason I am setting this up is so that I can edit the files on my local computer.

evol262
Nov 30, 2010
#!/usr/bin/perl
Your user IDs aren't mapped.

The easiest way to do this is to just use [homes]
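A minimal [homes] section for smb.conf might look like this (a sketch; Samba's defaults cover the rest):

```ini
[homes]
   comment = Home Directories
   browseable = no
   read only = no
   valid users = %S
```

After editing, run testparm to sanity-check the config, add the Unix user to Samba's password database with smbpasswd -a pi, and restart smbd. For [homes], %S expands to the connecting username, so each user only sees their own home directory, with write access.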

GobiasIndustries
Dec 14, 2007

Lipstick Apathy
I've got two hard drives in my server with identical directory structures for media (two 2 TB drives). I just purchased a 6 TB drive and would like to consolidate everything onto one drive. The structure is below; there won't be any duplicates in the Movies directories, but some of the TV shows will have seasons on different drives. My plan was to cp everything from drive A to drive C, but how do I merge drive B's files neatly into drive C? There's a possibility that browsing the directories from my MacBook left hidden files around.

code:
/TV
 /Series A
  /Season 01
   Ep1.ext
   Ep2.ext
  /Season 02
  /etc

/Movies
 /Movie A Folder
  Movie A.ext
 /Movie B Folder
  Movie B.ext

evol262
Nov 30, 2010
#!/usr/bin/perl
rsync does this very easily
rsync -a /path/to/a /path/to/c

Repeat for b. I like "rsync -avH --progress --stats" if you want to see what it's doing (there are probably short flags for --progress and --stats if you care)

ToxicFrog
Apr 26, 2008


evol262 posted:

rsync does this very easily
rsync -a /path/to/a /path/to/c

Repeat for b. I like "rsync -avH --progress --stats" if you want to see what it's doing (there are probably short flags for --progress and --stats if you care)

rsync -aPhSHAX is my preferred rsync invocation:

-a: archive: recurse, preserve ownership, permissions, and timestamps, preserve symlinks, and preserve devices and special files
-SHAX: options that should be in -a but aren't: preserve sparse files, hardlinks, ACLs, and xattrs
-P: equivalent to --progress --partial; show progress information, keep partially transferred files if interrupted, and use those files to resume next time if possible.
-h: use human-readable units instead of blocks or bytes for progress

There's no short flag for --progress on its own or for --stats, though.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

rsync does this very easily
rsync -a /path/to/a /path/to/c

Repeat for b. I like "rsync -avH --progress --stats" if you want to see what it's doing (there are probably short flags for --progress and --stats if you care)
You want /path/to/a/ and /path/to/c/ with the trailing slashes unless you're a monster
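To make the slash rule concrete, here's a throwaway demo (scratch paths under /tmp; assumes rsync is installed):

```shell
# Set up a tiny source tree and two empty destinations:
mkdir -p /tmp/slashdemo/a/sub && touch /tmp/slashdemo/a/sub/file.txt
mkdir -p /tmp/slashdemo/c1 /tmp/slashdemo/c2

# Without a trailing slash, the source directory itself lands inside the target:
rsync -a /tmp/slashdemo/a /tmp/slashdemo/c1    # -> /tmp/slashdemo/c1/a/sub/file.txt

# With a trailing slash, only the directory's contents are copied:
rsync -a /tmp/slashdemo/a/ /tmp/slashdemo/c2   # -> /tmp/slashdemo/c2/sub/file.txt
```

So for the merge question above, something like rsync -a /mnt/a/ /mnt/c/ followed by rsync -a /mnt/b/ /mnt/c/ (mount points assumed) lays both trees directly into C, merging the shared TV directories.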

fuf
Sep 12, 2004

haha
What's a good way to automatically reconnect to ssh sessions when I open my laptop and the wifi reconnects?

I'm sure I used to use autossh for exactly this, but it's not working for some reason - the connection just freezes and I have to close the terminal window completely (as opposed to regular ssh which just disconnects with the "broken pipe" message). Unless I'm missing an option somewhere?

This is on the new Ubuntu 16.04 with the default gnome terminal app.

kujeger
Feb 19, 2004

OH YES HA HA

fuf posted:

What's a good way to automatically reconnect to ssh sessions when I open my laptop and the wifi reconnects?

I'm sure I used to use autossh for exactly this, but it's not working for some reason - the connection just freezes and I have to close the terminal window completely (as opposed to regular ssh which just disconnects with the "broken pipe" message). Unless I'm missing an option somewhere?

This is on the new Ubuntu 16.04 with the default gnome terminal app.

try mosh

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

fuf posted:

What's a good way to automatically reconnect to ssh sessions when I open my laptop and the wifi reconnects?

I use mosh. It's wonderful, although ssh has more features.

fuf
Sep 12, 2004

haha
Thanks, mosh seems cool.

xtal
Jan 9, 2011

by Fluffdaddy
If you really liked Arch Linux and really hated Debian, what would you want to run on a fleet of ~300 servers? (Is it FreeBSD?)

I want a simple distribution with a simple package manager, but more stability than a rolling release.

RFC2324
Jun 7, 2012

http 418

xtal posted:

If you really liked Arch Linux and really hated Debian, what would you want to run on a fleet of ~300 servers? (Is it FreeBSD?)

I want a simple distribution with a simple package manager, but more stability than a rolling release.

CentOS/RHEL.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

RFC2324 posted:

CentOS/RHEL.

Seconded. You want 10-year support cycles and attention to security? You got it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

xtal posted:

If you really liked Arch Linux and really hated Debian, what would you want to run on a fleet of ~300 servers? (Is it FreeBSD?)

I want a simple distribution with a simple package manager, but more stability than a rolling release.
If you containerize your applications you're probably looking at Alpine and apk underneath whatever you decide to host on anyway

evol262
Nov 30, 2010
#!/usr/bin/perl

xtal posted:

If you really liked Arch Linux and really hated Debian, what would you want to run on a fleet of ~300 servers? (Is it FreeBSD?)

I want a simple distribution with a simple package manager, but more stability than a rolling release.

The question is "how are you going to manage ~300 servers?"

If you're gonna redeploy everything from containers or whatever (or new AMIs/glance/whatever) and configure the hosts with cloud-init and/or some CM system, the answer is very different than if you're managing ~300 "traditional" servers, where Arch/CoreOS/Alpine/etc. would frankly be a nightmare

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Odd question:

At 5:30p yesterday, my cron job notification emails from one of my servers started coming in as 'tech@mydomain.com; on behalf of; Cron Daemon <root@mydomain.com>' instead of just from 'tech@mydomain.com'

It runs postfix; it's an Ubuntu 12.04 server. I didn't change anything yesterday at all, and probably wasn't even logged into that server. Emails from that server from other things (some PHP scripts that run, etc.) didn't change. Would Rackspace (our email provider) have changed something on their end? The other weird thing is that in the email headers there is one new line:

Sender: tech@mydomain.com

That line was never there in the earlier emails. Weird. Oddly enough 'last' told me wtmp didn't exist so I had to re-create that.

I only noticed because my Outlook rule wasn't putting all those in the 'Cron Jobs' folder, they were flooding my inbox all evening. Any idea what would cause this? I looked at all the files that changed in the last 24 hours and nothing looked odd.

IAmKale
Jun 7, 2007

やらないか

Fun Shoe
I :yotj:'d recently and I have to return my current work laptop. The hard drive inside it, though, is mine so I don't have to scramble too quickly to back up my Linux stuff.

How feasible is it to drop an existing Linux install into new hardware? My Windows experience tells me it's best to do a clean install of an OS when you change up hardware, is that a good rule for Linux too? I have / and /home on separate partitions so it wouldn't be a big deal to reinstall, I'm just worried about losing my PPAs and other customizations that don't reside in /home.

IAmKale fucked around with this message at 16:34 on May 4, 2016

covener
Jan 10, 2004

You know, for kids!

IAmKale posted:

I :yotj:'d recently and I have to return my current work laptop. The hard drive inside it, though, is mine so I don't have to scramble too quickly to back up my Linux stuff.

How feasible is it to drop an existing Linux install into new hardware? My Windows experience tells me it's best to do a clean install of an OS when you change up hardware, is that a good rule for Linux too? I have / and /home on separate partitions so it wouldn't be a big deal to reinstall, I'm just worried about losing my PPAs and other customizations that don't reside in /home.

Bringing the old disk up on new HW has been pretty straightforward for me. Your network interfaces might get incremented/renamed, which can be a little speedbump, but nothing too tricky. My (pre-systemd) cheatsheet has a note about whacking /etc/udev/rules.d/70-persistent-net.rules, for example.

RFC2324
Jun 7, 2012

http 418

IAmKale posted:

I :yotj:'d recently and I have to return my current work laptop. The hard drive inside it, though, is mine so I don't have to scramble too quickly to back up my Linux stuff.

How feasible is it to drop an existing Linux install into new hardware? My Windows experience tells me it's best to do a clean install of an OS when you change up hardware, is that a good rule for Linux too? I have / and /home on separate partitions so it wouldn't be a big deal to reinstall, I'm just worried about losing my PPAs and other customizations that don't reside in /home.

Linux is flexible on this. The only problems you are likely to see are device names changing, and possibly xwindows having an issue because of a video driver not working.

Lum
Aug 13, 2003

So I'm running Gentoo ~amd64 and want to fart about with wayland.

I've set the wayland USE flag and am running the kde overlay with sddm as my login manager and Plasma 5 as my main DE. I get offered Plasma (Wayland) as a possible login session, but logging in to it just gets me a blank screen, and then I have to restart sddm to get a login screen back. Using Weston does work, but is, well, kinda pointless.

The laptop is a Skylake system with nVidia optimus, but for now I'm just running the i915 driver and not even bothering with nVidia.

Any ideas where I should start looking?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

I only have a small deployment, but I really, really like it. Dogtag is nice if you need smartcard/yubikey/whatever support, but it's somewhat bloated and doesn't move very fast. I don't know any caveats for cfssl, but if I were starting a new deployment for a large environment, it'd be on a short list
Doubling back here:

Is it possible for a single cfssl API server to run several intermediate CAs (obviously, with their own certificates), or am I running several container instances?

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
So I leased a dedicated server for a few months to play around with. I want to set it up with nzbget (usenet downloader), a plex server, and openvpn for now. I have a few questions. I'm running the latest version of CentOS 7 by the way.

Is there a best practice partitioning guide for setting up a server? On my own boxes I've always just followed what the RHEL install guide suggests: a 500 MB /boot, 10 GB /, a /home partition, and a 4 GB swap partition, in that order. The RHEL guide also recommends xfs over ext4. I've always stuck with ext4 since I don't know any better. Should I bother futzing around with more partitions, logical or LVM partitions, RAID partitions? I consider all the data on this box expendable and wouldn't lose any sleep over anything breaking.

A lot of security guides recommend or mention installing fail2ban to lock down ssh, but I noticed this wasn't in the official CentOS repositories but in the EPEL repository. Since CentOS/RHEL is used for servers I thought that was kind of odd because why wouldn't there be an "official" thing for ssh security? Is there a different tool that's recommended instead of fail2ban, or is this really the gold standard and there's some reason why it's not a part of the official repository?

Speaking of fail2ban, it looks like it wants to install firewalld. I've never really heard of firewalld before, but I have heard lots about iptables which I thought was the gold standard. Again back to the previous question, if fail2ban isn't part of the official repository in an OS used primarily for servers, and if it wants me to install a non-standard firewall, that seems odd. Basically should I be using firewalld or iptables?

Last question for now: it is my understanding that if I'm running a server, it's in my best interest to compartmentalize my different servers/processes (daemons?). This is for security but also resource allocation. A few pages ago I asked about docker and what it did for me as a simple user, to which the answer was apparently nothing, but if I'm setting up a server then I guess I'd want to use docker to keep my nzbget, plex, and openvpn servers separated from each other, right? Like, I can have nzbget running in a docker container downloading files to, say, /home/user/downloads, and then have plex running in its own docker container but with read access to /home/user/downloads, all the while openvpn runs in its own container as well without access to /home/user/downloads, so if someone were to break into my openvpn server they wouldn't be able to access anything else on the system. Is that the right idea?

Thanks in advance.

ExcessBLarg!
Sep 1, 2001

Boris Galerkin posted:

Is there a best practice partitioning guide for setting up a server?
The partitions you set up are fine. Honestly you could do one big / partition and be done with it. You only need a separate /boot if you're using a root filesystem the bootloader can't read. A separate /home will allow you to update the OS independent of your personal files. Sometimes people like to separate /tmp and /var on multi-user servers so that if those partitions fill up with temporary and log files it doesn't hose the whole machine, but that's not really a concern on a single-user machine.

A 4 GB swap partition is kind of big, but otherwise fine. A lot of people run servers without any swap. Some swap space is useful so that the OS can page out rarely-needed dirty pages and stuff like that. I personally never bother with more than 1 GB of swap.

ext4 is standard and fine for most tasks. xfs is better if you're regularly churning through a bunch of small files. At one time there was greater risk of data loss during crash and power events in xfs than ext3 and even ext4. I don't know if xfs ever adopted ext4's sync on rename/truncate semantics. Personally I'd stick with ext4 for / and only use xfs for a specific data partition, like a news spool (if you were running a news server), assuming you don't just go and use zfs.

If the machine doesn't already have hardware RAID, I would set up a RAID 1 mirror of everything to avoid downtime in the event of a disk failure. If you don't care about having to rebuild the machine in the event of a disk failure though, then it's not necessary. LVM is great for systems with many partitions, but see above on partitioning advice. Logical partitions are a hold-over from the MBR partitioning format, which doesn't support EFI or 2 TB+ disks. These days I'd use GPT+LVM or GPT alone, and you probably have to use GPT if it's an EFI machine.

Boris Galerkin posted:

A lot of security guides recommend or mention installing fail2ban to lock down ssh,
So long as you have strong passwords or, better yet, disable password authentication and only use public key (and you keep up with security updates), you don't need fail2ban. People do recommend it, along with running sshd on a high port, port knocking, and whatever else. There's nothing wrong with any of that, but it's not really necessary either. Drive-by ssh attempts are a thing, but if you have strong credentials they're not getting into the system. Using fail2ban helps because it will cut down on the number of login attempts, which keeps your log files cleaner, but at the cost of additional firewall accounting and the risk of possibly locking yourself out. Honestly, running sshd on a higher port will also reduce the number of login attempts if that's your goal, and it's a pretty easy thing to do if it's a single-user machine.

The reality is that most RHEL servers don't use fail2ban and run sshd on port 22 with nothing fancy. So long as they have strong credentials they don't get hacked. They might have a massive /var/log/btmp from failed login attempts, though.
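That said, if you do decide to run fail2ban from EPEL, the entire ssh jail is a few lines in jail.local (a sketch; the values are illustrative, and the [sshd] section name is per fail2ban 0.9+):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600
```

findtime and bantime are in seconds, so this bans an IP for an hour after 5 failures within 10 minutes.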

Boris Galerkin posted:

Basically should I be using firewalld or iptables?
firewalld is new, iptables is familiar. At this point you can use whichever you prefer. You don't have to set up a firewall, but you probably want to in order to restrict Plex to your VPN and not expose it to the public Internet.

Boris Galerkin posted:

It is my understanding that if I'm running a server, it's in my best interest to compartmentalize my different servers/processes (daemons?).
Compartmentalization has been all the rage since Docker/containers came out, and VMs generally before that. They're really good for a few things:
  • Shipping turn-key container images of software and their dependencies.
  • Maintaining different OS/library versions if you have the need to simultaneously run older and newer software.
  • Upgrading one component of a server without having to upgrade all the software at once.
  • Multiple administrative domains (if you have multiple administrators).
But, honestly, none of those are really your use case if you're just running a few daemons on a personal server. Security-wise it's a wash. Yes, if there's an exploit in Plex then it wouldn't affect your OpenVPN container directly, but the exploited Plex would still have access to your VPN network, so the practical consequence might not be that different. On the downside, you have to stay on top of security updates for each container, not just the one system. Docker also adds an additional layer to the system, so there's always the risk that Docker/libcontainer/whatever itself has a vulnerability.

Boris Galerkin posted:

This is for security but also resource allocation.
Linux manages "resource allocation" just fine on its own.

Boris Galerkin posted:

Like, I can have nzbget running in a docker container downloading files to, say, /home/user/downloads, and then have plex running in its own docker container but with read access to /home/user/downloads, all the while openvpn runs in its own container as well without access to /home/user/downloads, so if someone were to break into my openvpn server they wouldn't be able to access anything else on the system. Is that the right idea?
You can already do this, without Docker, using standard user/group permissioning. Give the nzbget user write access to /home/user/downloads (or maybe just /home/nzbget), the plex user read access, and OpenVPN runs as nobody:nogroup by default.
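Sketched out, that scheme is just (the user/group names are examples, not anything these packages necessarily create for you; the setup lines need root, so they're shown as comments):

```shell
# One-time setup, as root:
#   useradd --system --no-create-home nzbget
#   usermod -aG nzbget plex        # plex can read files group-owned by nzbget
#
# The directory mode that makes the scheme work, demoed on a scratch dir:
mkdir -p /tmp/permdemo/downloads
chmod 2750 /tmp/permdemo/downloads    # owner rwx, group r-x, others nothing;
                                      # setgid bit keeps the group on new files
stat -c '%a' /tmp/permdemo/downloads  # -> 2750
```

In the real layout the directory would be owned nzbget:nzbget, so nzbget writes, plex (via group membership) reads, and OpenVPN's nobody:nogroup sees nothing.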

ExcessBLarg! fucked around with this message at 16:38 on May 9, 2016

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
I already disabled both root and password logins so that's not an issue. I guess if fail2ban and such aren't a big deal then I won't bother with that.

ExcessBLarg! posted:

Compartmentalization has been all the rage since Docker/containers came out, and VMs generally before that. They're really good for a few things:
  • Shipping turn-key container images of software and their dependencies.
  • Maintaining different OS/library versions if you have the need to simultaneously run older and newer software.
  • Upgrading one component of a server without having to upgrade all the software at once.
  • Multiple administrative domains (if you have multiple administrators).
But, honestly, none of those are really your use case if you're just running a few daemons on a personal server. Security-wise it's a wash. Yes, if there's an exploit in Plex then it wouldn't affect your OpenVPN container directly, but the exploited Plex would still have access to your VPN network, so the practical consequence might not be that different. On the downside, you have to stay on top of security updates for each container, not just the one system. Docker also adds an additional layer to the system, so there's always the risk that Docker/libcontainer/whatever itself has a vulnerability.

None of that matters to me. It's just that I figured I might as well learn something new (Docker), you know?

quote:

You can already do this, without Docker, using standard user/group permissioning. Give the nzbget user write access to /home/user/downloads (or maybe just /home/nzbget), the plex user read access, and OpenVPN runs as nobody:nogroup by default.

Huh, should I be installing/running all those services as a different user?

ExcessBLarg!
Sep 1, 2001

Boris Galerkin posted:

Huh, should I be installing/running all those services as a different user?
You should, but they may already be by default. I'm not familiar with nzbget and plex, but check "ps aux" to see what user they're running as, and if they're running as root you may be able to change that in their configuration files or in the init scripts.
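For example, a more targeted check than eyeballing the full ps aux listing (sshd and the shell's own PID here are just stand-ins for whatever daemon you're checking):

```shell
# Print the user each matching process runs as (Linux procps syntax):
ps -o user= -C sshd

# Same idea by PID; $$ is the current shell, purely as a demo:
ps -o user= -p $$
```

If that prints root, look for a user/User option in the daemon's config file or init script.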

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

ExcessBLarg! posted:

You should, but they may already be by default. I'm not familiar with nzbget and plex, but check "ps aux" to see what user they're running as, and if they're running as root you may be able to change that in their configuration files or in the init scripts.

I haven't installed those services yet but thanks. I'll check it out later tonight. To be perfectly clear, I wouldn't/shouldn't have to manually do a useradd plex etc right? It'll just make the user when I set it up (assuming it makes its own user)?

RFC2324
Jun 7, 2012

http 418

Boris Galerkin posted:

I haven't installed those services yet but thanks. I'll check it out later tonight. To be perfectly clear, I wouldn't/shouldn't have to manually do a useradd plex etc right? It'll just make the user when I set it up (assuming it makes its own user)?

Check to see if it is running as its own user. If not, you will probably need to create an account to run it as.

hifi
Jul 25, 2012

firewalld is just a front end to iptables. I ran fail2ban for a while, but it's more of a cosmetic thing to see IPs getting banned vs. rejected logins. The best thing you can do is disable password logins. Also, changing the ssh port away from the default will cut down on drive-by attempts too.
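For reference, the sshd_config lines for both of those (the port number is an arbitrary example; keep an existing session open until you've confirmed you can still log in):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
Port 2222
```

Restart sshd after editing for the changes to take effect.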

Odette
Mar 19, 2011

hifi posted:

Also, changing the ssh port away from the default will cut down on drive-by attempts too.

Definitely do this. I was getting north of 1k attempts/day on the standard SSH port before I switched to a different port.

jaeger
Jul 28, 2005
...
I understand Samba 4 is pretty OK at being an Active Directory-style authentication source these days, at least in some ways.

Would it be possible to set up a Samba machine that auths clients like AD but uses whatever PAM stack is on the system as a backend? In this case I'm asking because there's a setup where TACACS is the main stop for users, and it would be neat to allow vCenter to auth against it through a Samba server (since vCenter more or less wants to talk to AD).

Anyone done something like this? Setting up PAM to auth against TACACS isn't hard and I've set up samba to act as an AD client before, but not as a fake AD server with a random PAM backend.

YouTuber
Jul 31, 2004

by FactsAreUseless
I just found a better option for hosting my server: more RAM, HD space, bandwidth, et al. I'm told the best way to transfer the server is via rsync. What shouldn't be rsynced over? /dev and /proc?

YouTuber fucked around with this message at 03:12 on May 10, 2016

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
The content on your server is probably all held under a single directory, so just rsync that. You don't want to rsync the entire directory tree.

If you have a database or logs or whatever, it's probably held under /var somewhere. If so, you should dump the database to a file, rsync that file, then re-import it into the database on the new server.
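That pattern, sketched with stand-ins (the "newbox" host and every path here are hypothetical; substitute your own, and use the dump tool for whatever database you actually run):

```shell
# Stand-in for: mysqldump --all-databases > /tmp/all.sql
echo 'CREATE TABLE demo (id INT);' > /tmp/all.sql

# Demoed locally; the real move would be something like:
#   rsync -a /var/www/ newbox:/var/www/
#   rsync -a /tmp/all.sql newbox:/root/
rsync -a /tmp/all.sql /tmp/migrated.sql

# Then on the new server, re-import: mysql < /root/all.sql
```

Dump-then-copy matters because rsyncing a live database's data files can capture them mid-write and hand you a corrupt copy.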

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

RFC2324 posted:

Check to see if it is running as its own user. If not, you will probably need to create an account to run it as.

If you (or anyone) was interested: nzbget defaults to running the daemon as root, but you can easily change that in the configuration file. I made a system user and a service file like the one from the Arch wiki, and it seems to work just fine. The only thing is that running the daemon opens up a built-in web server for the web client, accessible by anyone, and the authentication method seems to be a plain-text password. Plus, if you're in the web client you can upload your own nzb files, and I presume this could be easily abused. So as soon as I saw nzbget was working I shut it down overnight until I figured out how to deal with this.
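For reference, a unit along those lines looks roughly like this (a sketch, not the wiki's exact file; the nzbget user and the /usr/bin/nzbget path are assumptions to adjust for your install):

```ini
# /etc/systemd/system/nzbget.service
[Unit]
Description=NZBGet daemon
After=network.target

[Service]
User=nzbget
Type=forking
ExecStart=/usr/bin/nzbget -D
ExecStop=/usr/bin/nzbget -Q

[Install]
WantedBy=multi-user.target
```

Then systemctl daemon-reload, systemctl enable nzbget, and systemctl start nzbget.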

I'm thinking of the following firewall setup:

- allow ssh from all incoming but with only public key authentication
- allow vpn from all incoming but with only public key authentication (that's a thing, right?)
- allow access to nzbget et al. only from the internal IP address

Does that sound about right for basic security?

Anyway while we're talking about iptables I was following this guide yesterday and had some questions about the following:

code:
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

[...]
# make sure nothing comes or goes out of this box
iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP
In the above, they already set the default policy of dropping all connections to the input, output, and forward chains in the first three lines. Why bother with the last two lines?

code:
# Allow incoming ssh only
iptables -A INPUT -p tcp -s 0/0 -d $SERVER_IP --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s $SERVER_IP -d 0/0 --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
For the INPUT rule, what's the purpose of the --sport (source port?) flag and its value 513:65535? Same goes for the --dport flag for the OUTPUT rule too I guess.

Also what's the point of the match/state flags? Is it just an explicit way to say "only accept this input if the connection to port 22 is new?" What does established mean in this case? If I only had the NEW state flag set then would iptables just drop all ssh commands from me other than the initial one to connect?

(Feel free to point me to a basic tutorial if I'm asking too many stupid questions. I've tried googling but unfortunately a lot of the websites and such I find just say "do this!" and doesn't really explain anything which doesn't help, and man iptables doesn't tell me anything about the states.)

ExcessBLarg!
Sep 1, 2001
That firewall script is overspecified and unnecessarily complex. You're right that the DROP commands are redundant, and specifying the source port of the incoming connection is pretty pointless. Honestly you can get away with something as simple as:
code:
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

iptables -A INPUT -i lo -j ACCEPT 
iptables -A INPUT -i tun0 -j ACCEPT

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT # ssh
iptables -A INPUT -p udp --dport 1194 -j ACCEPT # OpenVPN
That default-denies everything on input and forward, but allows any outgoing connection. It then allows anything on the loopback interface (i.e., 127.0.0.1), anything over the VPN (assuming OpenVPN is configured to use the tun0 interface), all ICMP (e.g., ping), and all connections to the ssh and OpenVPN ports from anywhere.

The "-m state --state ESTABLISHED,RELATED" bit enables connection tracking and is necessary to allow incoming packets for connections that originate on your server, otherwise you never get any traffic back. Although it looks cryptic, it's found pretty much in every default-deny iptables policy.
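One caveat: rules entered at the shell vanish on reboot. iptables-save emits the live ruleset in a loadable format; for the policy above, the saved file (e.g. /etc/sysconfig/iptables on CentOS, reloaded by the iptables service or iptables-restore) looks roughly like this (a sketch; real iptables-save output adds packet counters and explicit -m tcp/-m udp match modules, but this loads the same):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -i tun0 -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p udp --dport 1194 -j ACCEPT
COMMIT
```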

Xenomorph
Jun 13, 2001

Xenomorph posted:

Apparently I may have switched to 4.2/4.3 after it was broken - possibly by the very security update I was going for (Badlock).

It looks like the April 12th update (4.3.7 and 4.2.10) that fixed a ton of CVEs also broke winbind (and/or parts of how it does lookups).
For example, "wbinfo -g / getent group" works, "wbinfo -u / getent passwd" does not.

This command can also be used to test the failure:
code:
net ads search -P '(objectClass=user)' '*'
On 4.1, it returns the contents of our user database.
On 4.2/4.3, this will fail with this message:
code:
ads_do_paged_search_args: ldap_search_with_timeout((objectCategory=user)) -> Time limit exceeded
In doing searches, trying to figure out what I was doing wrong, I just kept finding more people with the same issue.

https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1572876

https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1573526

https://bugzilla.samba.org/show_bug.cgi?id=11872

Anyone following this (come on, it can't just be me here) should be relieved to know that this was fixed in Samba 4.2.12, 4.3.9, and 4.4.3 (May 2nd). (https://bugzilla.samba.org/show_bug.cgi?id=11872)

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

So, this morning my VMware Ubuntu guest would autoresize the resolution when I resized the VMware window.

Then this happened:

1. Booted the VM with a GParted live CD and resized some partitions.
2. Booted the VM back into Ubuntu.
3. It will no longer autoresize the resolution when I resize the VMware window.
4. It will only go up to a resolution of 1360x768, even if I press the VMware fullscreen button. It definitely used to resize to 1920x1200 at this point.

How do I get back the old behavior?
