|
The mount.cifs command has a switch to specify the username (-o username=...), and if you don't use it, it defaults to the user that is running the command - which is probably why that failed. You can also connect "manually" in Dolphin (or any KDE application) by typing in the address - it looks like a URL, but you can replace "http" with a number of other services. For Windows file shares, try smb://user@server.ip , optionally with /sharename. If you can ssh to a computer, you can also use fish:// or sftp:// or scp:// - I never quite know which of those will work best, but at least one of them will let you browse remote files over ssh.
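For reference, the username switch and the Dolphin addresses side by side - a sketch only, with the server address, share name, and user all made up:

```shell
# Mount a Windows share as a specific user (prompts for the password).
# //192.168.1.10/share, /mnt/share and "alice" are placeholders.
sudo mount.cifs //192.168.1.10/share /mnt/share -o username=alice

# The same share in Dolphin's address bar, no mounting needed:
#   smb://alice@192.168.1.10/share
# And over ssh, any of these usually works:
#   fish://alice@192.168.1.10/home/alice
#   sftp://alice@192.168.1.10/home/alice
```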
|
# ? Aug 10, 2019 12:41 |
|
|
|
Computer viking posted:You can also connect "manually" in dolphin (or any KDE application) by typing in the address - it looks like an URL, but you can replace "http" with a number of other services. It is a URL. The U stands for 'uniform'. HTTP is a type of URL scheme, which is why you specify http instead of it being assumed.
|
# ? Aug 10, 2019 15:27 |
|
Running into a weird issue after upgrading from Fedora 29 to 30. After upgrading the login screen didn't appear when booting and when pressing ctrl+\ I saw that the boot process got stuck after code:
The good news is the OpenSSH server is started, so I can ssh into it and see how to fix it from there. I'm not sure where to look, though; boot.log doesn't show any info other than that it stopped after the Network Manager script dispatcher service. And neither does /var/log/messages or the Xorg logs. After upgrading from 28 to 29 I also had an issue with xfce not starting, which was resolved by systemctl set-default graphical.target. That didn't resolve my issues now, though. Not sure where to look next - anyone have an idea?
|
# ? Aug 10, 2019 16:08 |
|
So, getting two Pis for two different projects and successfully messing with both of them has made me want to learn Linux. As far as learning goes, I know I could install Linux on my computer itself and just go that way, but are there benefits to having something like a Zero W dedicated solely to learning and tinkering?
|
# ? Aug 10, 2019 22:06 |
|
LODGE NORTH posted:So, getting two Pis for two different projects and successfully messing with both of them has made me want to learn Linux. I always recommend starting with a VM to slowly tackle projects/problems on it so that if anything goes south all you need to do is revert to a prior save state. You can figure out which distro you like, what types of packages you'll want and need, and the general howto of moving around and configuring it all. If you decide it's not for you or you lose interest, you're no worse for wear and can easily delete the VM. After some time doing this, you'll have a good idea of what you want and whether you should set up a dual boot or a separate machine proper. However, if you're actively working with Pis for your job or projects, then I don't see why another personal setup would be a bad idea.
|
# ? Aug 11, 2019 18:00 |
|
The main benefit of having a little server like this is the option to have something running constantly to provide network services like pihole or music storage
|
# ? Aug 11, 2019 21:10 |
|
tjones posted:I always recommend starting with a VM to slowly tackle projects/problems on it so that if anything goes south all you need to do is revert to a prior save state. You can figure out which distro you like, what types of packages you'll want and need, and the general howto of moving around and configuring it all. If you decide its not for you or you lose interest you're no worse for wear and can easily delete the VM. I think my biggest hurdle right now is figuring out if it's worth learning Linux if I don't necessarily come across it on a daily basis and feel fine using other OSes. I think my main problem is that I had assumed that "learning" Linux was in itself almost akin to learning PHP or C++ or some sort of code, when it's ultimately more or less learning how to do things via a command line.
|
# ? Aug 12, 2019 01:20 |
|
Educating yourself is never a waste of time, if you wanna do it, do it. It's generally better to have an itch or a specific goal you want to achieve though. My excuse is I wanted to play with making a MUD, and back in the bad old days that meant having a linux machine. It turned into a career admining linux servers.
|
# ? Aug 12, 2019 02:59 |
|
xzzy posted:Educating yourself is never a waste of time, if you wanna do it, do it.
|
# ? Aug 12, 2019 03:08 |
|
tjones posted:I always recommend starting with a VM to slowly tackle projects/problems on it so that if anything goes south all you need to do is revert to a prior save state. You can figure out which distro you like, what types of packages you'll want and need, and the general howto of moving around and configuring it all. If you decide its not for you or you lose interest you're no worse for wear and can easily delete the VM. I usually recommend the exact opposite: a "total immersion" strategy. Pick something like Ubuntu or Mint and install it on your everyday system, only booting back into Windows if there's something you absolutely can't figure out how to do in Linux.
|
# ? Aug 12, 2019 05:49 |
|
Having more knowledge, especially in a field relevant to your profession or hobbies, is never a bad thing.
|
# ? Aug 12, 2019 20:45 |
I've been using Ubuntu for over a year now on one of my machines. I want to add CompTIA's Linux+ to my cert list/resume. Any recommendations on a book that will give well-rounded knowledge of the OS?
|
|
# ? Aug 13, 2019 03:12 |
|
Xfce has got its first update in over 4 years
|
# ? Aug 13, 2019 04:16 |
|
I've been running openSUSE Tumbleweed on a new build with a Ryzen 2700X and GTX 1660, where I run a lot of stressful mprime (the Linux version of Prime95) and CUDA workloads, and find myself trying to diagnose some stability issues. I put in an NH-D15 cooler and thought maybe with this I could enable PBO for a few extra MHz. The BIOS confusingly has two PBO options, one main one in CPU config and another under the "AMD CBS -> XFR" submenu, iirc. Setting these both to Enabled seemed to make the system really unstable, where my display would frequently go black and require a hard reset. It also ran 10C hotter @ max Tdie, ~70C vs 60C without. So I've put the BIOS basically back to defaults, nothing "overclocked" except the appropriate DOCP (XMP) profile for my RAM (3000MT/s DDR4 2x8GB), put both PBO settings back from "Enabled" to the default "Auto" (same as "Disabled", who knows?) and it seems maybe more stable, though I need to wait a day or two to really see if that's the case. Anyways, all this instability had me trying to look at logs with journalctl, and there are a bunch of errors that I'm wondering if I should be concerned about: 1) Every boot I get this message code:
Side question: "linux-ijaf" was some random hostname that the installer set for me (somehow missed the option at the time), and I tried later setting the hostname using the "hostname" command and editing "/etc/hostname", but this message early in the boot process still shows the old name, while later in the log (from the same boot; I changed the hostname many boots ago) it corrects itself to the newer name. Not sure if there's more I need to edit? Other messages below show the hostname "gypsy", which I've set manually. 2) Also see this once per boot: code:
3) Then I see a bunch of these types of errors, like about 1 or 2 per minute continuously as the system runs code:
code:
code:
Anyone know if these messages could be problematic or just red herrings?
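On the hostname side question: since Tumbleweed is a systemd distro, the usual way to make the name stick is hostnamectl rather than the hostname command (which only changes it until reboot); "gypsy" here is just the name from the post:

```shell
# Set the static hostname persistently (this rewrites /etc/hostname).
sudo hostnamectl set-hostname gypsy

# Show the static/transient hostnames systemd currently knows about:
hostnamectl
```

If the very early boot messages still show the old name, it may be baked into the initramfs; regenerating it (dracut -f on openSUSE) is worth a try.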
|
# ? Aug 13, 2019 16:51 |
|
peepsalot posted:
It (probably) means your USB-C ports aren't loading or initializing. peepsalot posted:
It's the first one. You can verify by looking up the vendor/device [1022:1453] somewhere like https://www.pcilookup.com/ Some googling around suggests that if this is the port where your graphics card is, it might be related to powersaving, which you can turn off: https://forum.level1techs.com/t/threadripper-pcie-bus-errors/118977 (there's some other stuff in that thread that might apply; it's probably worth reading some of it.)
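If it does turn out to be the PCIe powersaving issue from that thread, the usual knob is a kernel command-line parameter; a sketch (the grub paths are the openSUSE ones, adjust per distro):

```shell
# Check the current ASPM policy first:
cat /sys/module/pcie_aspm/parameters/policy

# To disable ASPM persistently, add pcie_aspm=off to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then regenerate:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```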
|
# ? Aug 13, 2019 17:26 |
|
SoftNum posted:It (probably) means your USB-C ports aren't loading or initializing. SoftNum posted:It's the first one. you can verify by looking up the vendor/device [1022:1453] somewhere like https://www.pcilookup.com/
|
# ? Aug 13, 2019 19:06 |
|
I wasn't sure whether to put this in the backup thread or here, but I'm looking for a "Linuxy sysadmin" type of answer, so I'm going here: Should I be thinking about tape backup or a spare hard disk for my VM snapshots? I'm currently running a pair of home servers about 6 feet apart from each other. One's a Xeon with ECC RAM, which is more of a traditional server setup, and the other is a Ryzen 1700 without ECC. Both are running CentOS 7 with ZFSonLinux modules installed, and I have a script which runs through my LVM snapshots once a week. One server has the cronjob at 0300HRS on a Saturday morning and the other has an almost identical cronjob at 0300HRS on a Sunday morning. My scripts take an LVM snapshot, bzip the dumped snapshot and store it on their own ZFS array, then rsync the smaller bzipped dump file over to a "store_remote" directory on the opposing server. A bit like this directory structure on each side: code:
So every weekend I have a fresh set of dumps of each LVM for the server and its brother. If one server exploded I would have a full set of its dumps on the brother server and vice versa. I can never lose a VM unless either both ZFS arrays explode or I get hacked and someone rm -rf's both servers. None of my VMs are critical if I lost them. My important stuff is backed up via separate cronjobs into a borgbackup archive and also rsynced to cloud storage. My important stuff that I couldn't afford to lose only amounts to about 4GB.

If both my servers were compromised or poo poo the bed then I would probably be really annoyed at the time it would take to replicate all of my VMs (currently 7 VMs on the Xeon box and 3 VMs on the Ryzen rig, but those are Windows desktops, so bigger in size). Should I be planning on backing up some more? Would an old HDD that is attached to a SATA port on one of the servers be enough, so that I have another cronjob that mounts it once a week, rsyncs my "store" and "store_remote" directories to it and then unmounts it? Or is that not good enough?

Should I buy a cheap 500GB Western Digital USB drive from Amazon and manually, physically plug it in on a Sunday afternoon and do it myself? I think I've probably answered my own question there, actually. I'd probably forget to do it every Sunday, but 500GB would be more than enough and if the data was two or three weeks old, then meh. Buying an LTO tape drive or something would be stupidly overkill, wouldn't it?

I'm not even gonna entertain the idea of uploading 150 or 200GB to the cloud once a week, but something about my current setup feels like it's not quite bulletproof. Although it probably puts many other setups to shame (apart from the sort of people that post on this board, of course!). Any tips on whether I could be doing anything better with my current setup? EDIT: I think I've mentally nailed it:
apropos man fucked around with this message at 20:07 on Aug 13, 2019 |
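The weekly dump-and-ship flow described above might look roughly like this as a script - a sketch only, with the VG, VM, pool, and host names all invented and error handling omitted:

```shell
#!/bin/sh
# Weekly backup sketch: snapshot an LVM volume, dump+compress it onto
# the local ZFS array, then rsync the result to the brother server.
set -e

lvcreate --snapshot --size 5G --name myvm-snap /dev/vg0/myvm
dd if=/dev/vg0/myvm-snap bs=4M | bzip2 > /tank/store/myvm.img.bz2
lvremove -f /dev/vg0/myvm-snap

# Ship the compressed dump into the other box's "store_remote" dir.
rsync -a /tank/store/myvm.img.bz2 brother:/tank/store_remote/
```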
# ? Aug 13, 2019 19:37 |
|
I'm able to see my NAS now and transfer files, at a blistering 500KiB/s. My motherboard, an Asus P5QPL-AM, has gigabit LAN and it's connected with a Cat 5E patch cable. When transferring files from my Windows 10 PC I can easily saturate the gigabit network either sending or receiving files. The NAS is a RAID 5 of 2TB drives and my desktops all have SSDs, so I'm fairly certain drive speed is not an issue. The OS is Kubuntu 18.04; how can I troubleshoot this?
|
# ? Aug 13, 2019 21:20 |
|
Crotch Fruit posted:I'm able to see my NAS now and transfer files, at a blistering fast 500KiB/s rate. My motherboard, an Asus P5QPL-AM has gigabit LAN and it's connected with a Cat 5E patch cable. When transferring files from my Windows 10 PC I can easily saturate the gigabit network on either sending or receiving files. The NAS is a RAID 5 of 2TB drives and my desktops all have SSDs so I'm fairly certain drive speed is not an issue. OS is kubuntu 18.04, how can I troubleshoot this? What version is it mounting as? Assuming it's an SMB/CIFS share. edit: You can get a list by typing 'mount'; you're looking for a parameter that starts with 'vers=' astral fucked around with this message at 02:41 on Aug 14, 2019 |
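Concretely, once the share is mounted with mount.cifs you can check (or force) the dialect like this - the share and user names are made up:

```shell
# List mounted cifs filesystems with their options; look for vers=
mount -t cifs

# SMB1 (vers=1.0) is often the slow culprit; force a newer dialect:
sudo mount -t cifs //nas/share /mnt/nas -o username=alice,vers=3.0
```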
# ? Aug 14, 2019 02:36 |
|
Unless I'm reading this wrong, I'm not seeing anything that specifically looks like my SMB share when I issue the mount command. I suspect this could be because I am using Dolphin to browse the server? I also dual boot this PC with Windows 10 if that helps explain any odd mount points. I have not attempted to copy a file using Win 10 on this hardware, I can attempt to do so after my encoding job finishes.code:
|
# ? Aug 14, 2019 03:12 |
|
apropos man posted:I wasn't sure whether to put this in the backup thread or here, but I'm looking for a "Linuxy sysadmin" type of answer, so I'm going here: My personal approach is a backed-up data volume, plus building out my network to rebuild itself via PXE/kickstart/puppet. If one of my VMs goes wonky for whatever reason I can just nuke it, and as long as the replacement has a matching MAC address it rebuilds itself in ~20 minutes. A few apps don't work real well with this (why is plex so sensitive?!?) but most of your linux services will happily just rebuild. All you need at that point is to back up your data drive, network configs, and a good copy of your puppetmaster/PXE host to spin back up pretty quick. If you need more redundancy in what you have for some reason, you are looking at offsite backup, which you can do by uploading to some service, or using an LTO to carry a copy somewhere offsite.
|
# ? Aug 14, 2019 07:41 |
|
I've got a funky thing where iwconfig reports that power save is disabled for the device but iw shows it enabled. I can disable it with iw but I don't know how to make it permanent and there doesn't seem to be any documentation around this. Does anyone know how to make this change permanent in Fedora? code:
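(For what it's worth, the commonly suggested way to persist this on Fedora - assuming NetworkManager manages the wifi interface - is a drop-in config; a sketch, untested here:)

```shell
# 2 = disable powersave, 3 = enable; see NetworkManager.conf(5)
sudo tee /etc/NetworkManager/conf.d/wifi-powersave-off.conf <<'EOF'
[connection]
wifi.powersave = 2
EOF
sudo systemctl restart NetworkManager
```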
|
# ? Aug 14, 2019 11:37 |
|
RFC2324 posted:My personal approach is a backed up data volume, and built out my network to rebuild itself via PXE/kickstart/puppet. If one of my VMs goes wonky for whatever reason I can just nuke it, and as long as the replacement has a matching MAC address it rebuilds itself in ~20 minutes. A few apps don't work real well with this(why is plex so sensitive?!?) but most of your linux services will happily just rebuild. Interesting. I really should get into puppet and more onto the automation side of things. Ever felt like switching over to Emby for serving up your video files? I use Jellyfin, which is a fork of Emby after the Emby devs had a tiff that Emby were going to start using proprietary codecs. I wasn't particularly bothered about the proprietary codec stuff but I gave Jellyfin a try in docker and it works a charm. I only use it for streaming torrented movies around my LAN: no outside access. They've really done a good job with Jellyfin. The UI isn't quite as slick as Plex's but it works a treat. And due to it being open there's no signup necessary or logging into Plex.tv and generating a claim code that's unique to your server every time you wanna stop the container and docker pull the latest version. You just set up Jellyfin with persistent storage so that it remembers your thumbnails, stop it every couple of weeks, pull the latest build and start it again. I use a one-liner bash script to start Jellyfin up which points it to my local storage.
|
# ? Aug 14, 2019 14:42 |
|
Puppet is a pretty significant thing these days, but be aware it's a colossal rabbit hole that you might never escape.
|
# ? Aug 14, 2019 14:52 |
|
Crotch Fruit posted:Unless I'm reading this wrong, I'm not seeing anything that specifically looks like my SMB share when I issue the mount command. I suspect this could be because I am using Dolphin to browse the server? I also dual boot this PC with Windows 10 if that helps explain any odd mount points. I have not attempted to copy a file using Win 10 on this hardware, I can attempt to do so after my encoding job finishes. Does it have the same speed problem if you mount it yourself, instead of through dolphin? And yes, ruling out the hardware by successfully testing under Windows is also a good step.
|
# ? Aug 14, 2019 18:00 |
|
xzzy posted:Puppet is pretty significant thing these days, but be aware it's a colossal rabbit hole that you might never escape. Ansible as an alternative to explore as well.
|
# ? Aug 14, 2019 18:59 |
|
nem posted:Ansible as an alternative to explore as well. I'd look at all the tools like this. I don't regret choosing puppet, but I probably won't again when I redo the system. Hand-writing orchestration scripts is a pain when the work has really already been done.
|
# ? Aug 14, 2019 19:17 |
|
I hadn't really thought much about motherboard compatibility with Linux until recently, when I discovered my latest build doesn't have any official support for the SuperIO chip used in it, making diagnostic temperature, voltage, etc. data difficult or impossible to read. So I'm just wondering, for future reference, if there is any particular brand or line of boards that makes an effort to properly support Linux? I mean aside from prebuilt System76 machines or whatever.
|
# ? Aug 14, 2019 20:20 |
|
nem posted:Ansible as an alternative to explore as well. For the VM use case, puppet (or salt) are probably better, since they are agent based and you can just run the server on the host. Ansible isn't an agent system (unless you run Ansible tower or do stuff with Openstack etc), so the best you could do here is either trigger Ansible via script or run it in local mode inside the VM. Infrastructure automation etc. is indeed a rabbit hole, but a worthwhile one. I use a combination of cloud-init and Ansible both at work and at home.
|
# ? Aug 14, 2019 20:42 |
|
I mounted the NAS with the mount command instead of browsing it in Dolphin, and the speed is now a little over 100MiB/s. The only minor annoyance that remains is that although I can browse the folder in Dolphin and browse to where I mounted my SMB folder, I don't have permission to copy files to the NAS. I suspect this could be related to running the mount command using sudo? I tried to do it without and it said only root could use the options flag, which I used to specify the user name. I am now transferring files over to free up space using sudo cp -r ./Videos/<big folder> ./<file server mount point> but the downside is the command line doesn't give me a progress bar. The only reason I know the speed is because I looked at the network properties; previously when using Dolphin to copy files over (at a much slower rate) it would show a progress bar in the panel. The task manager panel would slowly fill up with green as the copy progressed - no percentage or speed or any other information, but better than nothing. I just want a little window with a progress bar like the way it has been since Windows 95. I assume there might be a setting to tweak in KDE to show a file copy window? At the very least, if I have to use the command line to copy files, is there a way I can get even a text-based progress bar?
|
# ? Aug 14, 2019 21:15 |
|
Crotch Fruit posted:I mounted the NAS with the mount command instead of browsing it in Dolphin, the speed is now a little over 100MiB/s. The only minor annoyance that remains is although I can browse the folder in Dolphin and browse to where I mounted my SMB folder, I don't have permission to copy files to the NAS. I suspect this could be related to running the mount command using sudo? I tried to do it without and it said only root could use the options command, which I used to specify the user name. I am now transferring files over to free up space using sudo cp -r ./Videos/<big folder> ./<file server mount point> but the down side is the command line doesn't give me a progress bar. The only reason I know the speed is because I looked at the network properties, previously when using Dolphin to copy files over (at a much slower rate) it would show a progress bar in the panel. The task manager panel would slow fill up with green as the copy progressed, no percentage or speed or any other information but better than nothing. I just want a little window with a progress bar like the way it has been since Windows 95. I assume there might be a setting to tweak in KDE to show a file copy window? At the very least, if I have to use the command line to copy files, is there a way I can get even a text based progress bar? So the speed is solid that way? Great! You can set uid/gid as parameters in your mount command, which should solve the permissions issue. Your uid/gid are likely 1000 since iirc you mentioned using ubuntu, but you can always doublecheck by running `id -u yourusernamehere` for userid and the same command with the `-g` flag instead for group id.
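Putting that together, the mount command with ownership mapped to your user might look like this (server, share, and user names are placeholders):

```shell
id -u    # your numeric user id, e.g. 1000
id -g    # your numeric group id, e.g. 1000

# Files on the share then appear owned by uid/gid 1000:
sudo mount -t cifs //nas/share /mnt/nas \
    -o username=alice,uid=1000,gid=1000
```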
|
# ? Aug 14, 2019 21:32 |
|
Hollow Talk posted:For the VM use case, puppet (or salt) are probably better, since they are agent based and you can just run the server on the host. Ansible isn't an agent system (unless you run Ansible tower or do stuff with Openstack etc), so the best you could do here is either trigger Ansible via script or run it in local mode inside the VM. I may be missing something? Ansible is agent-based (ssh as delivery) if you set hosts/groups; otherwise it'll run locally. Modules need to be used according to whether they refer to assets on the target server or on the orchestrator.
|
# ? Aug 14, 2019 22:04 |
|
nem posted:I may be missing something? Ansible is agent-based (ssh as delivery) if you set hosts/groups otherwise it'll run locally. Agent doesn't mean ssh-agent. Agent systems install an agent on a host that pulls configuration directions from a central server, which means you only deploy your definitions to the server. Ansible, in turn, actively pushes configurations to hosts by executing any steps remotely (local is basically the same thing, only with a local shell). Agent-based systems have the advantage that any new host can simply pull the current definitions from the server at any time or at specified intervals, whereas push-style tools need to be run/triggered in order to do anything.
|
# ? Aug 14, 2019 22:13 |
|
Hollow Talk posted:Agent doesn't mean ssh-agent. Agent systems install an agent on a host that pulls configuration directions from a central server, which means you only deploy your definitions to the server. Ansible, in turn, actively pushes configurations to hosts by executing any steps remotely (local is basically the same things, only with a local shell). It's not hard to write. Use a centralized git repo, add a cron to do a nightly pull, and if the mtime on the directory changes, process the changes. It's the same process I use throughout all servers that participate in nightly updates with apnscp. I refer to agents in terms of whether something can act on behalf of something else remotely, not so much active/passive push/pull varieties. In this case, yes, not ssh-agent, but rather using ssh as a delivery pipeline for Ansible on the server to process arbitrary code is what I refer to as an agent. Compare with an agentless approach in which Ansible would have to be run manually on the server processing the changes.
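Ansible actually ships a helper for exactly this pull-style pattern, ansible-pull; a cron sketch along those lines (the repo URL and playbook name are invented):

```shell
# /etc/cron.d/ansible-pull -- nightly at 03:00.
# ansible-pull clones/updates the repo, then runs the playbook against
# localhost; -o means "only run if the checkout actually changed".
0 3 * * * root /usr/bin/ansible-pull -o \
    -U https://git.example.com/config.git local.yml \
    >> /var/log/ansible-pull.log 2>&1
```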
|
# ? Aug 14, 2019 22:21 |
|
nem posted:It's not hard to write. Use a centralized git repo, add a cron to do a nightly pull, if the mtime changes on the directory process the changes. It's the same process I use throughout all servers that participate in nightly updates with apnscp. Sure, but why bother with Ansible at this point? I feel that this use case is exactly what tools like puppet cover by design. What's the advantage of using Ansible at this point? I feel that by running Ansible locally, I would lose out on a number of features and security considerations, e.g. when using Ansible Vault, I would need credentials on the host, on top of giving every host access to git (and managing that access). I also can't do centralised certificate signing/pushing via my own CA. vvvv "locally" refers to the target system that should be configured. I run it via SSH as well. vvvv Hollow Talk fucked around with this message at 23:18 on Aug 14, 2019 |
# ? Aug 14, 2019 22:58 |
|
Wait what? I use ansible over SSH all the time at work.
|
# ? Aug 14, 2019 23:07 |
|
Crotch Fruit posted:I mounted the NAS with the mount command instead of browsing it in Dolphin, the speed is now a little over 100MiB/s. The only minor annoyance that remains is although I can browse the folder in Dolphin and browse to where I mounted my SMB folder, I don't have permission to copy files to the NAS. I suspect this could be related to running the mount command using sudo? I tried to do it without and it said only root could use the options command, which I used to specify the user name. Create an entry for the mountpoint in /etc/fstab and give it the options "user" and "noauto", so you can mount it without sudo. It's been a while since I played with that, but you may also need to chown the mountpoint to your user. Either before or after mounting.
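A sketch of such a line (server, share, paths, and the credentials file are all made up):

```shell
# /etc/fstab -- "user" lets a non-root user run `mount /mnt/nas`,
# "noauto" keeps it from being mounted automatically at boot.
//nas/share  /mnt/nas  cifs  user,noauto,credentials=/home/alice/.smbcred,uid=1000,gid=1000  0  0
```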
|
# ? Aug 14, 2019 23:25 |
|
Hollow Talk posted:Sure, but why bother with Ansible at this point? I feel that this use case is exactly what tools like puppet cover by design. What's the advantage of using Ansible at this point? One may prefer the tooling of x over y. Offering up an alternative doesn't preclude one from using the original suggestion. Besides, one may be more familiar with Ruby or Python, at which point extending core features may need to be taken into consideration. If you want to use a custom CA to guard git over HTTPS, just set the server up with a custom certificate, or better yet build an X509 licensing server from which, given the right request from an authorized subnet, a client can acquire an SSL certificate. I do that via https://yum.apnscp.com. The http:// variant doesn't have any CA restrictions but is limited in what it serves. Everything here is open-ended. Just build around whatever toolkit works for your use case and don't get regimented in thinking it has to be done a particular way, because technology is always changing. Ansible and Puppet may very well be as relevant as "sendmail" 10 years from now.
|
# ? Aug 15, 2019 00:19 |
|
peepsalot posted:I hadn't really thought much about motherboard compatibility with Linux until recently when I discovered my latest build doesn't have any official support for the SuperIO chip used in it, making diagnostic temp, voltage data etc difficult or impossible to read. I'd be interested to hear this, too. I really need a new computer, and was planning to build around a Ryzen CPU, but if there is a significant difference in mobo support it would be good to know.
|
# ? Aug 15, 2019 06:57 |
|
|
|
nem posted:Everything here is open-ended. Just build around whatever toolkit works for your use case and don't get regimented in thinking it has to be done a particular way because technology is always changing. Ansible and Puppet may very well be as relevant as "sendmail" 10 years from now. I was surprised to hear puppet seconded, given last I heard it was considered old and superseded. Guess it came back
|
# ? Aug 15, 2019 08:21 |