Computer viking
May 30, 2011
Now with less breakage.

The mount.cifs command has a switch to specify the username (the username= option), and if you don't set it, it defaults to the user running the command - which is probably why that failed.
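As a minimal sketch (server, share, and mountpoint here are placeholders):
code:
# Mount an SMB/CIFS share with an explicit username instead of letting
# mount.cifs default to the user running the command.
sudo mount -t cifs //SERVER/share /mnt/share -o username=youruser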

You can also connect "manually" in Dolphin (or any KDE application) by typing in the address - it looks like a URL, but you can replace "http" with a number of other services. For Windows file shares, try smb://user@server.ip, optionally with /sharename.

If you can ssh to a computer, you can also use fish:// or sftp:// or scp:// - I never quite know which of those will work best, but at least one of them will let you browse remote files over ssh.

feedmegin
Jul 30, 2008

Computer viking posted:

You can also connect "manually" in Dolphin (or any KDE application) by typing in the address - it looks like a URL, but you can replace "http" with a number of other services.

It is a URL. The U stands for 'uniform' (Uniform Resource Locator). http is just one scheme, which is why you specify it instead of it being assumed.

LochNessMonster
Feb 3, 2005

I need about three fitty


Running into a weird issue after upgrading from Fedora 29 to 30. After the upgrade the login screen didn't appear on boot, and when pressing Ctrl+\ I saw that the boot process got stuck after
code:
[  OK  ] Started Permit User Sessions.
[  OK  ] Started Command Scheduler.
         Starting Hold until boot process finishes up...
[  OK  ] Started Deferred execution scheduler.
         Starting Light Display Manager...
         Starting Hostname Service...
[  OK  ] Started OpenSSH server daemon.
[  OK  ] Started Hostname Service.
         Starting Network Manager Script Dispatcher Service...
[  OK  ] Started Network Manager Script Dispatcher Service.

The good news is the OpenSSH server is started, so I can ssh into it and try to fix it from there. I'm not sure where to look though: boot.log doesn't show any info other than that it stopped after the Network Manager Script Dispatcher service, and neither does /var/log/messages or the Xorg logs.

After upgrading from 28 to 29 I also had an issue with xfce not starting, which was resolved by systemctl set-default graphical.target. That didn't resolve my issue this time though. Not sure where to look next, anyone have an idea?
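For reference, a couple of generic places to look over ssh, assuming lightdm really is the display manager unit here (the boot log above suggests it is):
code:
# Status and per-boot log of the display manager unit
systemctl status lightdm
journalctl -b -u lightdm
# Confirm the default boot target is still graphical
systemctl get-default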

LODGE NORTH
Jul 30, 2007

So, getting two Pis for two different projects and successfully messing with both of them has made me want to learn Linux.

As far as learning goes, I know I could install Linux on my computer itself and just go that way, but are there benefits to having something like a Zero W solely for learning, experimenting, things like that?

tjones
May 13, 2005

LODGE NORTH posted:

So, getting two Pis for two different projects and successfully messing with both of them has made me want to learn Linux.

As far as learning goes, I know I could install Linux on my computer itself and just go that way, but are there benefits to having something like a Zero W solely for learning, experimenting, things like that?

I always recommend starting with a VM to slowly tackle projects/problems on it, so that if anything goes south all you need to do is revert to a prior save state. You can figure out which distro you like, what types of packages you'll want and need, and the general how-to of moving around and configuring it all. If you decide it's not for you or you lose interest, you're no worse for wear and can easily delete the VM.

After some time doing this, you'll have a good idea of what you want and whether you should set up a dual boot or a proper separate machine.

However, if you're actively working with Pis for your job or projects, then I don't see why another personal setup would be a bad idea.

spiritual bypass
Feb 19, 2008

Grimey Drawer
The main benefit of having a little server like this is the option to have something running constantly to provide network services like pihole or music storage

LODGE NORTH
Jul 30, 2007

tjones posted:

I always recommend starting with a VM to slowly tackle projects/problems on it, so that if anything goes south all you need to do is revert to a prior save state. You can figure out which distro you like, what types of packages you'll want and need, and the general how-to of moving around and configuring it all. If you decide it's not for you or you lose interest, you're no worse for wear and can easily delete the VM.

After some time doing this, you'll have a good idea of what you want and whether you should set up a dual boot or a proper separate machine.

However, if you're actively working with Pis for your job or projects, then I don't see why another personal setup would be a bad idea.

I think my biggest hurdle right now is figuring out whether it's worth learning Linux if I don't necessarily come across it on a daily basis and feel fine using other OSes. I think my main problem is that I had assumed that "learning" Linux was in itself almost akin to learning PHP or C++ or some sort of code, when it's ultimately more or less learning how to do things via a command line.

xzzy
Mar 5, 2009

Educating yourself is never a waste of time, if you wanna do it, do it.

It's generally better to have an itch or a specific goal you want to achieve though. My excuse is I wanted to play with making a MUD, and back in the bad old days that meant having a linux machine. It turned into a career admining linux servers. :v:

RFC2324
Jun 7, 2012

http 418

xzzy posted:

Educating yourself is never a waste of time, if you wanna do it, do it.

It's generally better to have an itch or a specific goal you want to achieve though. My excuse is I wanted to play with making a MUD, and back in the bad old days that meant having a linux machine. It turned into a career admining linux servers. :v:

:yossame:

Powered Descent
Jul 13, 2008

We haven't had that spirit here since 1969.

tjones posted:

I always recommend starting with a VM to slowly tackle projects/problems on it, so that if anything goes south all you need to do is revert to a prior save state. You can figure out which distro you like, what types of packages you'll want and need, and the general how-to of moving around and configuring it all. If you decide it's not for you or you lose interest, you're no worse for wear and can easily delete the VM.

I usually recommend the exact opposite: a "total immersion" strategy. Pick something like Ubuntu or Mint and install it on your everyday system, only booting back into Windows if there's something you absolutely can't figure out how to do in Linux.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
Having more knowledge, especially in a field relevant to your profession or hobbies, is never a bad thing.

The Alpha Centauri
Feb 15, 2019
I've been using Ubuntu for over a year now on one of my machines. I want to add CompTIA's Linux+ to my cert list/resume. Any recommendations on a book that will give well-rounded knowledge of the OS?

OhFunny
Jun 26, 2013

EXTREMELY PISSED AT THE DNC
Xfce has got its first update in over 4 years

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

I've been running openSUSE Tumbleweed on a new build with a Ryzen 2700X and GTX 1660, where I run a lot of stressful mprime (Linux version of prime95) and CUDA workloads, and I find myself trying to diagnose some stability issues.

I put in an NH-D15 cooler and thought maybe with this I could enable PBO for a few extra MHz. The BIOS confusingly has two PBO options, one main one in the CPU config and another under the "AMD CBS -> XFR" submenu, iirc. Setting both of these to Enabled seemed to make the system really unstable, where my display would frequently go black and require a hard reset. It also ran 10C hotter @ max Tdie, ~70C vs 60C without.

So I've put the BIOS basically back to defaults, nothing "overclocked" except the appropriate DOCP(XMP) profile for my RAM (3000MT/s DDR4 2x8GB), put both PBO settings back from "Enabled" to default "Auto" (same as "Disabled", who knows?) and it seems maybe more stable, though I need to wait a day or two to really see if that's the case.

Anyways, all this instability had me looking at logs with journalctl, and there are a bunch of errors that I'm wondering if I should be concerned about:

1) Every boot I get this message
code:
Aug 13 09:17:09 linux-ijaf kernel: Couldn't get size: 0x800000000000000e
I read this is related to Secure Boot in some way, but I tried with it disabled in the BIOS, and the only difference was that the error showed twice in that case, so I re-enabled it.

Side question: "linux-ijaf" was some random hostname that the installer set for me (I somehow missed the option at the time), and I tried later setting the hostname using the "hostname" command and editing "/etc/hostname", but this message early in the boot process still shows the old name, while later in the log (from the same boot; I changed the hostname many boots ago) it corrects itself to the newer name. Not sure if there's more I need to edit? Other messages below show the hostname "gypsy" which I've set manually.
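For reference, I assume the usual systemd way to set it so it sticks is something like this (the hostname here is just the one I want):
code:
# Sets the static hostname and updates /etc/hostname
sudo hostnamectl set-hostname gypsy
hostnamectl status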

2) Also see this once per boot:
code:
 
Aug 13 09:19:38 gypsy kernel: ucsi_ccg 0-0008: failed to reset PPM!
Aug 13 09:19:38 gypsy kernel: ucsi_ccg 0-0008: PPM init failed (-110)
No idea what any of that means.

3) Then I see a bunch of these types of errors, like about 1 or 2 per minute continuously as the system runs
code:
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER:   device [1022:1453] error status/mask=00001000/00006000
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER:    [12] Timeout               
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER:   device [1022:1453] error status/mask=00000040/00006000
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER:    [ 6] BadTLP   
Pretty sure this is the corresponding line from lspci which matches the port suffix from the log lines:
code:
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
or maybe (probably not) this one? (idk, just slightly confused whether I should ignore the leading zeroes in the log or the trailing zero in lspci here)
code:
03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
Of course there are lots of other things in the log which might tell more about some of these issues, but these are all the scary red ones.

Anyone know if these messages could be problematic or just red herrings?

SoftNum
Mar 31, 2011

peepsalot posted:



2) Also see this once per boot:
code:
 
Aug 13 09:19:38 gypsy kernel: ucsi_ccg 0-0008: failed to reset PPM!
Aug 13 09:19:38 gypsy kernel: ucsi_ccg 0-0008: PPM init failed (-110)
No idea what any of that means.


It (probably) means your USB-C ports aren't loading or initializing.

peepsalot posted:


3) Then I see a bunch of these types of errors, like about 1 or 2 per minute continuously as the system runs
code:
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER:   device [1022:1453] error status/mask=00001000/00006000
Aug 13 09:47:47 gypsy kernel: pcieport 0000:00:03.1: AER:    [12] Timeout               
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER:   device [1022:1453] error status/mask=00000040/00006000
Aug 13 09:47:50 gypsy kernel: pcieport 0000:00:03.1: AER:    [ 6] BadTLP   
Pretty sure this is the corresponding line from lspci which matches the port suffix from the log lines:
code:
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
or maybe (probably not) this one? (idk, just slightly confused whether I should ignore the leading zeroes in the log or the trailing zero in lspci here)
code:
03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
Of course there are lots of other things in the log which might tell more about some of these issues, but these are all the scary red ones.

Anyone know if these messages could be problematic or just red herrings?

It's the first one. You can verify by looking up the vendor/device [1022:1453] somewhere like https://www.pcilookup.com/

Some googling around suggests that if this is the port where your graphics card is, it might be related to powersaving, which you can turn off:

https://forum.level1techs.com/t/threadripper-pcie-bus-errors/118977

(There's some other stuff in that thread that might apply; it's probably worth reading some of it.)

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

SoftNum posted:

It (probably) means your USB-C ports aren't loading or initializing.
Well, the board has a built-in USB-C port on the back, which works fine for charging and recognizing my phone. But it also has an unconnected header for USB 3.2 Gen 2 or something, which my case doesn't support. Is it possible that it can tell there is nothing connected to that header? In which case I can try disabling those specific ports in the BIOS. I didn't think that USB had any sort of active circuitry in the ports themselves that would make a difference whether they were connected or not, but maybe USB-C is special? OTOH, if that's the issue, I guess it probably doesn't make any difference whether I "fix" the error message anyway, since it seems pretty inconsequential.

SoftNum posted:

It's the first one. You can verify by looking up the vendor/device [1022:1453] somewhere like https://www.pcilookup.com/

Some googling around suggests that if this is the port where your graphics card is, it might be related to powersaving, which you can turn off:

https://forum.level1techs.com/t/threadripper-pcie-bus-errors/118977

(There's some other stuff in that thread that might apply; it's probably worth reading some of it.)
OK, I added "pcie_aspm=off" to my grub boot config defaults and those messages are gone now, fantastic!
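For reference, roughly what that change looks like (paths assume a grub2-based setup like openSUSE's; double-check your own):
code:
# /etc/default/grub - append the parameter to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="... pcie_aspm=off"

# then regenerate the grub config so it takes effect on the next boot
sudo grub2-mkconfig -o /boot/grub2/grub.cfg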

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I wasn't sure whether to put this in the backup thread or here, but I'm looking for a "Linuxy sysadmin" type of answer, so I'm going here:

Should I be thinking about tape backup or a spare hard disk for my VM snapshots?

I'm currently running a pair of home servers about 6 feet apart from each other. One's a Xeon with ECC RAM, which is more of a traditional server setup, and the other is a Ryzen 1700 without ECC. Both are running CentOS 7 with the ZFSonLinux modules installed, and I have a script which runs through my LVM snapshots once a week. One server has the cronjob at 0300HRS on a Saturday morning and the other has an almost identical cronjob at 0300HRS on a Sunday morning.

My scripts take an LVM snapshot, bzip the dumped snapshot and store it on their own ZFS array, then rsync the smaller bzipped dump file over to a "store_remote" directory on the opposing server. A bit like this directory structure on each side:

code:
/mnt
/mnt/LVM_snapshots
/mnt/LVM_snapshots/store
/mnt/LVM_snapshots/store_remote
My /mnt/LVM_snapshots directory is on a ZFS array on each side: one's a mirror and the other's in RAIDZ1.

So every weekend I have a fresh set of dumps of each LVM for the server and its brother. If one server exploded I would have a full set of its dumps on the brother server and vice versa. I can never lose a VM unless either both ZFS arrays explode or I get hacked and someone rm -rf's both servers.

None of my VMs are critical if I lost them. My important stuff is backed up via separate cronjobs into a borgbackup archive and also rsynced to cloud storage; the stuff I couldn't afford to lose only amounts to about 4GB.

If both my servers were compromised or poo poo the bed then I would probably be really annoyed at the time it would take to replicate all of my VMs (currently 7 VMs on the Xeon box and 3 VMs on the Ryzen rig, but those are Windows desktops, so bigger in size).

Should I be planning on backing up some more? Would an old HDD attached to a SATA port on one of the servers be enough, so that I have another cronjob that mounts it once a week, rsyncs my "store" and "store_remote" directories to it and then unmounts it? Or is that not good enough?

Should I buy a cheap 500GB Western Digital USB drive from Amazon and manually, physically plug it in on a Sunday afternoon and do it myself? I think I've probably answered my own question there, actually. I'd probably forget to do it every Sunday, but 500GB would be more than enough and if the data was two or three weeks old, then meh.

Buying an LTO tape drive or something would be stupidly overkill, wouldn't it? I'm not even gonna entertain the idea of uploading 150 or 200GB to the cloud once a week, but something about my current setup feels like it's not quite bulletproof. Although it probably puts many other setups to shame (apart from the sort of people that post on this board, of course!).

Any tips on anything I could be doing better with my current setup?

EDIT: I think I've mentally nailed it:

  • Cheap-ish 1TB USB3 spinning drive from Amazon.
  • One single EXT4 partition on it.
  • I take note of the UUID of the partition and put it in a bash script.
  • The bash script looks to see if the UUID is presently connected to the system.
  • If it is, the partition is mounted, old files are removed, new ones are rsynced, then the partition is unmounted. If the UUID isn't present the script logs an error. (See the sketch after this list.)
  • I put the script into a cronjob: 1200HRS Sunday.
  • When I get up on a Sunday morning I plug in the portable drive and unplug it on Sunday evening.
  • If I forget to plug the drive in I can manually run the script in a screen session.
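A minimal sketch of that script (the UUID and paths are placeholders):
code:
#!/usr/bin/env bash
# Rough sketch - UUID and paths need filling in.
set -euo pipefail

UUID="xxxx-xxxx-xxxx"              # the backup drive's partition UUID
MOUNTPOINT="/mnt/usb_backup"
DEV="/dev/disk/by-uuid/${UUID}"

if [[ ! -b "${DEV}" ]]; then
    logger -t snapshot-usb-backup "backup drive not present, skipping"
    exit 1
fi

mount "${DEV}" "${MOUNTPOINT}"
# Mirror the local and remote dump directories onto the drive,
# removing anything that no longer exists on the source side.
for d in store store_remote; do
    rsync -a --delete "/mnt/LVM_snapshots/${d}/" "${MOUNTPOINT}/${d}/"
done
umount "${MOUNTPOINT}"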

apropos man fucked around with this message at 20:07 on Aug 13, 2019

Not Wolverine
Jul 1, 2007
I'm able to see my NAS now and transfer files, at a blistering fast 500KiB/s. My motherboard, an Asus P5QPL-AM, has gigabit LAN and it's connected with a Cat 5e patch cable. When transferring files from my Windows 10 PC I can easily saturate the gigabit network either sending or receiving files. The NAS is a RAID 5 of 2TB drives and my desktops all have SSDs, so I'm fairly certain drive speed is not an issue. The OS is Kubuntu 18.04; how can I troubleshoot this?

astral
Apr 26, 2004

Crotch Fruit posted:

I'm able to see my NAS now and transfer files, at a blistering fast 500KiB/s. My motherboard, an Asus P5QPL-AM, has gigabit LAN and it's connected with a Cat 5e patch cable. When transferring files from my Windows 10 PC I can easily saturate the gigabit network either sending or receiving files. The NAS is a RAID 5 of 2TB drives and my desktops all have SSDs, so I'm fairly certain drive speed is not an issue. The OS is Kubuntu 18.04; how can I troubleshoot this?

What version is it mounting as? Assuming it's an SMB/CIFS share.

edit: You can get a list by typing 'mount'; you're looking for a parameter that starts with 'vers='
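For example, assuming it's mounted as CIFS at all:
code:
# List only CIFS mounts and their options; look for vers= in the output
mount -t cifs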

astral fucked around with this message at 02:41 on Aug 14, 2019

Not Wolverine
Jul 1, 2007
Unless I'm reading this wrong, I'm not seeing anything that specifically looks like my SMB share when I issue the mount command. I suspect this could be because I am using Dolphin to browse the server? I also dual boot this PC with Windows 10, if that helps explain any odd mount points. I have not attempted to copy a file using Win 10 on this hardware; I can attempt to do so after my encoding job finishes.
code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=1983572k,nr_inodes=495893,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=403636k,mode=755)
/dev/sda5 on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15659)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=403636k,mode=700,uid=1000,gid=1000)
/dev/sdb1 on /media/logan/32GB USB type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
/dev/sda2 on /media/logan/C256AD1156AD06EF type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)

RFC2324
Jun 7, 2012

http 418

apropos man posted:

I wasn't sure whether to put this in the backup thread or here, but I'm looking for a "Linuxy sysadmin" type of answer, so I'm going here:

Should I be thinking about tape backup or a spare hard disk for my VM snapshots?

I'm currently running a pair of home servers about 6 feet apart from each other. One's a Xeon with ECC RAM, which is more of a traditional server setup, and the other is a Ryzen 1700 without ECC. Both are running CentOS 7 with the ZFSonLinux modules installed, and I have a script which runs through my LVM snapshots once a week. One server has the cronjob at 0300HRS on a Saturday morning and the other has an almost identical cronjob at 0300HRS on a Sunday morning.

My scripts take an LVM snapshot, bzip the dumped snapshot and store it on their own ZFS array, then rsync the smaller bzipped dump file over to a "store_remote" directory on the opposing server. A bit like this directory structure on each side:

code:
/mnt
/mnt/LVM_snapshots
/mnt/LVM_snapshots/store
/mnt/LVM_snapshots/store_remote
My /mnt/LVM_snapshots directory is on a ZFS array on each side: one's a mirror and the other's in RAIDZ1.

So every weekend I have a fresh set of dumps of each LVM for the server and its brother. If one server exploded I would have a full set of its dumps on the brother server and vice versa. I can never lose a VM unless either both ZFS arrays explode or I get hacked and someone rm -rf's both servers.

None of my VMs are critical if I lost them. My important stuff is backed up via separate cronjobs into a borgbackup archive and also rsynced to cloud storage; the stuff I couldn't afford to lose only amounts to about 4GB.

If both my servers were compromised or poo poo the bed then I would probably be really annoyed at the time it would take to replicate all of my VMs (currently 7 VMs on the Xeon box and 3 VMs on the Ryzen rig, but those are Windows desktops, so bigger in size).

Should I be planning on backing up some more? Would an old HDD attached to a SATA port on one of the servers be enough, so that I have another cronjob that mounts it once a week, rsyncs my "store" and "store_remote" directories to it and then unmounts it? Or is that not good enough?

Should I buy a cheap 500GB Western Digital USB drive from Amazon and manually, physically plug it in on a Sunday afternoon and do it myself? I think I've probably answered my own question there, actually. I'd probably forget to do it every Sunday, but 500GB would be more than enough and if the data was two or three weeks old, then meh.

Buying an LTO tape drive or something would be stupidly overkill, wouldn't it? I'm not even gonna entertain the idea of uploading 150 or 200GB to the cloud once a week, but something about my current setup feels like it's not quite bulletproof. Although it probably puts many other setups to shame (apart from the sort of people that post on this board, of course!).

Any tips on anything I could be doing better with my current setup?

EDIT: I think I've mentally nailed it:

  • Cheap-ish 1TB USB3 spinning drive from Amazon.
  • One single EXT4 partition on it.
  • I take note of the UUID of the partition and put it in a bash script.
  • The bash script looks to see if the UUID is presently connected to the system.
  • If it is, the partition is mounted, old files are removed, new ones are rsynced, then the partition is unmounted. If the UUID isn't present the script logs an error.
  • I put the script into a cronjob: 1200HRS Sunday.
  • When I get up on a Sunday morning I plug in the portable drive and unplug it on Sunday evening.
  • If I forget to plug the drive in I can manually run the script in a screen session.

My personal approach is a backed-up data volume, plus building out my network to rebuild itself via PXE/kickstart/puppet. If one of my VMs goes wonky for whatever reason I can just nuke it, and as long as the replacement has a matching MAC address it rebuilds itself in ~20 minutes. A few apps don't work real well with this (why is Plex so sensitive?!?) but most of your Linux services will happily just rebuild.

All you need at that point is to back up your data drive, network configs, and a good copy of your puppetmaster/PXE host to spin back up pretty quick.

If you need more redundancy in what you have for some reason you are looking at offsite backup, which you can do by uploading to some service, or using an LTO to carry a copy to somewhere offsite.

Ashex
Jun 25, 2007

These pipes are cleeeean!!!
I've got a funky thing where iwconfig reports that power save is disabled for the device but iw shows it enabled. I can disable it with iw but I don't know how to make it permanent and there doesn't seem to be any documentation around this.

Does anyone know how to make this change permanent in Fedora?

code:
iw dev wlp0s20u2 set power_save off
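One approach, assuming NetworkManager manages the interface (which may not be the case here), is a drop-in config followed by a restart of NetworkManager:
code:
# /etc/NetworkManager/conf.d/wifi-powersave-off.conf
[connection]
# 2 = disable wifi power saving, 3 = enable
wifi.powersave = 2

# then: sudo systemctl restart NetworkManager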

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

RFC2324 posted:

My personal approach is a backed-up data volume, plus building out my network to rebuild itself via PXE/kickstart/puppet. If one of my VMs goes wonky for whatever reason I can just nuke it, and as long as the replacement has a matching MAC address it rebuilds itself in ~20 minutes. A few apps don't work real well with this (why is Plex so sensitive?!?) but most of your Linux services will happily just rebuild.

All you need at that point is to back up your data drive, network configs, and a good copy of your puppetmaster/PXE host to spin back up pretty quick.

If you need more redundancy in what you have for some reason you are looking at offsite backup, which you can do by uploading to some service, or using an LTO to carry a copy to somewhere offsite.

Interesting. I really should get into Puppet and more onto the automation side of things. Ever felt like switching over to Emby for serving up your video files? I use Jellyfin, which is a fork made after the Emby devs had a tiff about Emby starting to use proprietary codecs. I wasn't particularly bothered about the proprietary codec stuff, but I gave Jellyfin a try in Docker and it works a charm. I only use it for streaming torrented movies around my LAN: no outside access. They've really done a good job with Jellyfin. The UI isn't quite as slick as Plex but it works a treat. And because it's open there's no signup necessary, or logging into Plex.tv and generating a claim code that's unique to your server every time you wanna stop the container and docker pull the latest version. You just set up Jellyfin with persistent storage so that it remembers your thumbnails, stop it every couple of weeks, pull the latest build and start it again. I use a one-liner bash script to start Jellyfin up which points it to my local storage.

xzzy
Mar 5, 2009

Puppet is a pretty significant thing these days, but be aware it's a colossal rabbit hole that you might never escape.

astral
Apr 26, 2004

Crotch Fruit posted:

Unless I'm reading this wrong, I'm not seeing anything that specifically looks like my SMB share when I issue the mount command. I suspect this could be because I am using Dolphin to browse the server? I also dual boot this PC with Windows 10, if that helps explain any odd mount points. I have not attempted to copy a file using Win 10 on this hardware; I can attempt to do so after my encoding job finishes.


Does it have the same speed problem if you mount it yourself, instead of through dolphin?

And yes, ruling out the hardware by successfully testing under Windows is also a good step.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

xzzy posted:

Puppet is a pretty significant thing these days, but be aware it's a colossal rabbit hole that you might never escape.

Ansible is an alternative to explore as well.

RFC2324
Jun 7, 2012

http 418

nem posted:

Ansible is an alternative to explore as well.

I'd look at all the tools like this. I don't regret choosing Puppet, but I probably won't again when I redo the system. Hand-writing orchestration scripts is a pain when the work really has already been done.

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

I hadn't really thought much about motherboard compatibility with Linux until recently, when I discovered my latest build doesn't have any official support for the Super I/O chip used on it, making diagnostic data like temperatures and voltages difficult or impossible to read.

So I'm just wondering for future reference, if there is any particular brand or line of boards that make an effort to properly support Linux? I mean aside from like prebuilt System76 or whatever.

Hollow Talk
Feb 2, 2014

nem posted:

Ansible is an alternative to explore as well.

For the VM use case, puppet (or salt) is probably better, since they are agent-based and you can just run the server on the host. Ansible isn't an agent system (unless you run Ansible Tower or do stuff with OpenStack etc.), so the best you could do here is either trigger Ansible via script or run it in local mode inside the VM.
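For the pull-style variant, something like ansible-pull can do it, e.g. (repo URL and playbook name are placeholders):
code:
# Clone/update a playbook repo and apply it against localhost
ansible-pull -U https://example.com/config-repo.git local.yml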

Infrastructure automation etc. is indeed a rabbit hole, but a worthwhile one.

I use a combination of cloud-init and Ansible both at work and at home.

Not Wolverine
Jul 1, 2007
I mounted the NAS with the mount command instead of browsing it in Dolphin, and the speed is now a little over 100MiB/s. The only minor annoyance that remains is that although I can browse to where I mounted my SMB folder in Dolphin, I don't have permission to copy files to the NAS. I suspect this could be related to running the mount command using sudo? I tried to do it without and it said only root could use the options flag, which I used to specify the user name. I am now transferring files over to free up space using sudo cp -r ./Videos/<big folder> ./<file server mount point>, but the downside is the command line doesn't give me a progress bar. The only reason I know the speed is because I looked at the network properties; previously, when using Dolphin to copy files over (at a much slower rate), it would show a progress bar in the panel. The task manager panel would slowly fill up with green as the copy progressed - no percentage or speed or any other information, but better than nothing. I just want a little window with a progress bar like the way it has been since Windows 95. I assume there might be a setting to tweak in KDE to show a file copy window? At the very least, if I have to use the command line to copy files, is there a way I can get even a text-based progress bar?
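(For example, I gather rsync can show an overall progress readout if I used it instead of cp - the paths below are just placeholders:)
code:
# -a = archive/recursive, -h = human-readable sizes, --info=progress2 = overall progress
rsync -ah --info=progress2 ./Videos/BigFolder/ /mnt/nas/Videos/BigFolder/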

astral
Apr 26, 2004

Crotch Fruit posted:

I mounted the NAS with the mount command instead of browsing it in Dolphin, and the speed is now a little over 100MiB/s. The only minor annoyance that remains is that although I can browse to where I mounted my SMB folder in Dolphin, I don't have permission to copy files to the NAS. I suspect this could be related to running the mount command using sudo? I tried to do it without and it said only root could use the options flag, which I used to specify the user name. I am now transferring files over to free up space using sudo cp -r ./Videos/<big folder> ./<file server mount point>, but the downside is the command line doesn't give me a progress bar. The only reason I know the speed is because I looked at the network properties; previously, when using Dolphin to copy files over (at a much slower rate), it would show a progress bar in the panel. The task manager panel would slowly fill up with green as the copy progressed - no percentage or speed or any other information, but better than nothing. I just want a little window with a progress bar like the way it has been since Windows 95. I assume there might be a setting to tweak in KDE to show a file copy window? At the very least, if I have to use the command line to copy files, is there a way I can get even a text-based progress bar?

So the speed is solid that way? Great!

You can set uid/gid as parameters in your mount command, which should solve the permissions issue. Your uid/gid are likely 1000 since iirc you mentioned using ubuntu, but you can always double-check by running `id -u yourusernamehere` for the user id and the same command with the `-g` flag instead for the group id.
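Roughly like this (server, share, and username are placeholders):
code:
# Mount the share so files appear owned by uid/gid 1000 instead of root
sudo mount -t cifs //NAS/share /mnt/nas -o username=youruser,uid=1000,gid=1000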

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Hollow Talk posted:

For the VM use case, puppet (or salt) are probably better, since they are agent based and you can just run the server on the host. Ansible isn't an agent system (unless you run Ansible tower or do stuff with Openstack etc), so the best you could do here is either trigger Ansible via script or run it in local mode inside the VM.

I may be missing something? Ansible is agent-based (ssh as delivery) if you set hosts/groups; otherwise it'll run locally.

Modules need to be used according to whether they refer to assets on the target server or on the orchestrator.

Hollow Talk
Feb 2, 2014

nem posted:

I may be missing something? Ansible is agent-based (ssh as delivery) if you set hosts/groups; otherwise it'll run locally.

Modules need to be used according to whether they refer to assets on the target server or on the orchestrator.

Agent doesn't mean ssh-agent. Agent systems install an agent on a host that pulls configuration directions from a central server, which means you only deploy your definitions to the server. Ansible, in turn, actively pushes configurations to hosts by executing any steps remotely (local mode is basically the same thing, only with a local shell).

Agent-based systems have the advantage that any new host can simply pull the current definitions from the server at any time or at specified intervals, whereas push-style tools need to be run/triggered in order to do anything.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Hollow Talk posted:

Agent doesn't mean ssh-agent. Agent systems install an agent on a host that pulls configuration directions from a central server, which means you only deploy your definitions to the server. Ansible, in turn, actively pushes configurations to hosts by executing any steps remotely (local mode is basically the same thing, only with a local shell).

Agent-based systems have the advantage that any new host can simply pull the current definitions from the server at any time or at specified intervals, whereas push-style tools need to be run/triggered in order to do anything.

It's not hard to write. Use a centralized git repo, add a cron job to do a nightly pull, and if the mtime on the directory changes, process the changes. It's the same process I use throughout all servers that participate in nightly updates with apnscp.
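A loose sketch of that (repo path, playbook, and schedule are placeholders; this checks whether HEAD moved rather than the directory mtime, which amounts to the same idea):
code:
# /etc/cron.d/nightly-config-pull - run as root at 03:00
0 3 * * * root /usr/local/bin/pull-config.sh

# /usr/local/bin/pull-config.sh
#!/usr/bin/env bash
set -euo pipefail
cd /opt/config-repo
before=$(git rev-parse HEAD)
git pull --quiet
after=$(git rev-parse HEAD)
if [[ "${before}" != "${after}" ]]; then
    # apply the updated definitions locally; site.yml is a placeholder
    ansible-playbook -i localhost, -c local site.yml
fi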

I refer to agents in terms of whether something can act on behalf of something else remotely, not so much active/passive push/pull varieties. In this case, yes, not ssh-agent, but using ssh as a delivery pipeline for Ansible on the server to process arbitrary code is what I refer to as an agent. Compare with an agentless approach in which Ansible would have to be run manually on the server processing the changes.

Hollow Talk
Feb 2, 2014

nem posted:

It's not hard to write. Use a centralized git repo, add a cron to do a nightly pull, if the mtime changes on the directory process the changes. It's the same process I use throughout all servers that participate in nightly updates with apnscp.

I refer to agents in terms of whether something can act on behalf of something else remotely, not so much active/passive push/pull varieties. In this case, yes, not ssh-agent, but using ssh as a delivery pipeline for Ansible on the server to process arbitrary code is what I refer to as an agent. Compare with an agentless approach in which Ansible would have to be run manually on the server processing the changes.

Sure, but why bother with Ansible at this point? I feel that this use case is exactly what tools like puppet cover by design. What's the advantage of using Ansible here?

I feel that by running Ansible locally, I would lose out on a number of features and security considerations, e.g. when using Ansible Vault, I would need credentials on the host, on top of giving every host access to git (and managing that access). I also can't do centralised certificate signing/pushing via my own CA.

vvvv "locally" refers to the target system that should be configured. I run it via SSH as well. vvvv

Hollow Talk fucked around with this message at 23:18 on Aug 14, 2019

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Wait what? I use ansible over SSH all the time at work.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Crotch Fruit posted:

I mounted the NAS with the mount command instead of browsing it in Dolphin, and the speed is now a little over 100MiB/s. The only minor annoyance that remains is that although I can browse to where I mounted my SMB folder in Dolphin, I don't have permission to copy files to the NAS. I suspect this could be related to running the mount command using sudo? I tried to do it without and it said only root could use the options flag, which I used to specify the user name.

Create an entry for the mountpoint in /etc/fstab and give it the options "user" and "noauto", so you can mount it without sudo. It's been a while since I played with that, but you may also need to chown the mountpoint to your user, either before or after mounting.
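Something along these lines in /etc/fstab (server, share, and credentials path are placeholders):
code:
# user = mountable without sudo, noauto = don't mount at boot
//NAS/share  /mnt/nas  cifs  user,noauto,credentials=/home/youruser/.smbcredentials,uid=1000,gid=1000  0  0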

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Hollow Talk posted:

Sure, but why bother with Ansible at this point? I feel that this use case is exactly what tools like puppet cover by design. What's the advantage of using Ansible here?

I feel that by running Ansible locally, I would lose out on a number of features and security considerations, e.g. when using Ansible Vault, I would need credentials on the host, on top of giving every host access to git (and managing that access). I also can't do centralised certificate signing/pushing via my own CA.

vvvv "locally" refers to the target system that should be configured. I run it via SSH as well. vvvv

One may prefer the tooling of x over y. Offering up an alternative doesn't preclude one from using the original suggestion. Besides, one may be more familiar with Ruby or Python, at which point extending core features may need to be taken into consideration.

If you want to use a custom CA to guard git over HTTPS, just set the server up with a custom certificate, or better yet build an X.509 licensing server that, given the right request from an authorized subnet, can issue an SSL certificate. I do that via https://yum.apnscp.com. The http:// variant doesn't have any CA restrictions but is limited in what it serves.

Everything here is open-ended. Just build around whatever toolkit works for your use case and don't get regimented in thinking it has to be done a particular way because technology is always changing. Ansible and Puppet may very well be as relevant as "sendmail" 10 years from now.

CaptainSarcastic
Jul 6, 2013



peepsalot posted:

I hadn't really thought much about motherboard compatibility with Linux until recently, when I discovered my latest build doesn't have any official support for the Super I/O chip used on it, making diagnostic data like temperatures and voltages difficult or impossible to read.

So I'm just wondering for future reference, if there is any particular brand or line of boards that make an effort to properly support Linux? I mean aside from like prebuilt System76 or whatever.

I'd be interested to hear this, too. I really need a new computer, and was planning to build around a Ryzen CPU, but if there is a significant difference in mobo support it would be good to know.

RFC2324
Jun 7, 2012

http 418

nem posted:

Everything here is open-ended. Just build around whatever toolkit works for your use case and don't get regimented in thinking it has to be done a particular way because technology is always changing. Ansible and Puppet may very well be as relevant as "sendmail" 10 years from now.

I was surprised to hear puppet seconded, given last I heard it was considered old and superseded.

Guess it came back
