|
Not saying you are a newbie, but you just described the linux experience, with less swearing than most
|
# ? Aug 24, 2022 16:17 |
|
|
# ? Jun 1, 2024 21:47 |
|
Why is SMB(?) the dominant network file sharing protocol? I don't really know much about it or alternatives; I'm just curious.
|
# ? Aug 24, 2022 17:11 |
|
It's the windows file sharing protocol. If you aren't trying to work with a windows box, NFS is the standard, and there are plenty more options. I seem to recall one or two distros defaulting to samba for desktop shares, but I don't think it's the standard. Also, I think the protocol is actually CIFS now, if you ever need to get that granular
|
# ? Aug 24, 2022 17:21 |
|
Trying to figure out a frequent issue with my Linux Mint install - occasionally (at least once an hour), things just seem to end up... delayed. This is most obvious when I click a link in my web browser or open a new tab - sometimes it just takes a few seconds (5-7) for the page to load and populate with content. This also happens when loading local HTML files into my web browser (I teach web development, so I'm opening a lot of HTML files on my disk for lecture purposes), and in other scenarios too - for example, sometimes clicking a link in the Discord app will cause my system to pause for a few seconds before my browser pops open a new tab and loads the link. I feel like I've also noticed it happen occasionally when opening software on the machine, but generally speaking I open everything I need at the start of my workday and rarely close it.

This has been happening for at least a few months now - I can't recall a specific time it started or anything to help narrow it down. I've tried looking in the Logs app that Mint offers, but I can't find anything suggestive of the problem. It doesn't happen in my Windows install on the same machine, so I don't think it's an issue with, like, my networking hardware (and again, it seems to occur with opening local files as well). I haven't tried anything like setting up a new user account to test in - work's been really busy.

My next step is going to be to change my default browser to see if it's a Chrome issue (really need to make the switch eventually anyways), but I'm wondering if there's anywhere else I can look for log files to figure this out?

e: just caught VS Code causing a core dump when I opened it; that's part of my usual applications I use for work, so that may be part of the issue

death cob for cutie fucked around with this message at 21:22 on Aug 24, 2022 |
# ? Aug 24, 2022 17:23 |
|
RFC2324 posted:Not saying you are a newbie

Eh, no insult. I'd call myself a newbie too. Just in a weird sort of way where I've been a newbie for over 2 decades of occasional loving around with dual-boots, and doing stuff with routers and a pi.

RFC2324 posted:but you just described the linux experience, but with less swearing than most

Oh gently caress, there's a different OS I could be using that has really good documentation that is always up-to-date for all its internals? I should be using that!

RFC2324 posted:I seem to recall one or 2 distros defaulting to samba for desktop shares, but I don't think its the standard

KDE Dolphin defaults to samba usershares when you want to share folders. SMB really does seem to have advantages for casual, user-level, basic filesharing even in a non-windows environment, like encryption by default and an emphasis on browsable temporary connections. I looked at doing NFS instead of samba back when I was setting up, and everything about NFS seemed kinda a pain if you weren't just mounting a filesystem.
|
# ? Aug 24, 2022 18:12 |
|
Music Theory posted:Why is SMB(?) the dominant network file sharing protocol?

The other big one on the *nix side is traditionally NFS, which works differently in ways that can be really useful and really annoying.

The first big one is that while you log into an SMB mount with a username and password, NFS mounts are typically only restricted by IP networks or single IPs - but you only mount them once per client, and then the different users on the client get mapped to the corresponding user on the server. This is neat in large networks: mount /home or /bigdata from a server to a bunch of clients and/or other servers, and the users/permissions will work as if they were normal local filesystems. The downside is that the users and groups need to exist and agree between the server and all the clients.

Traditionally, NFS solved this by everything using the user/group ID number, and trusting the clients. So if a file is owned by user 1000 on the server, and you mount that on a client, user 1000 on the client owns it. If you have random untrusted clients, this is obviously a massive gaping security hole, perhaps better treated as "nobody built a fence in the first place" than a hole. On a network where you do trust the clients, you still need to make sure the same user gets the same user ID number on every machine - typically with some sort of network-based login, though you can match them up by hand if you only have a couple of users.

The more modern solution, introduced with NFSv4 - though you can use NFSv4 with the old system - is to use the username instead of the ID number internally. This uses a kerberos server as a third party - which is the kind of thing kerberos is for. Simplified, a user on the client side talks to the kerberos server to get a token that says "I really am viking@home.net trying to talk to fileserver.home.net", and sends that token along. The server can verify that this is genuine without having to talk to the kerberos server every time, and if it checks out, it will allow the request. Each user on the client side does this separately (and automatically...). Obviously, this needs working kerberos. It's nice when it works, though; you get the same "this is a normal-feeling file system with normal users and permissions" as in the old system, but the users are securely authenticated. Also massively overkill for a "NAS and a laptop" kind of scenario.

NFS has another completely unrelated annoyance/benefit over SMB: it's meant to handle transient outages, and does it by freezing all IO to the mounted file system until the server comes back, with a timeout somewhere between "many minutes" and "never". On the upside, this means you can unplug a cable or reboot the server, and eventually things will just work out without the programs being aware; from their perspective it was just a read or write that happened to take a very long time. On the downside, this means that if you didn't use the right mount options, it's easier to forcibly reboot your PC than to try and un-stick an NFS mount that disappeared.

And configuring NFSv4+kerberos is deep magic with an absolute jungle of bad/outdated/for-the-wrong-version-of-kerberos guides, and software that thinks "silently hang even with full debug flags" is a fine way to communicate its unhappiness. Or at least that was my experience some years ago.

Of the other ones, most of the network file systems I've heard of are for fancy cluster things - basically gluing together multiple servers to present what looks like a single network-mountable file system, typically with enough redundancy to tolerate one or more servers disappearing. I think at least one of them uses NFS in the background? It's not anything I've ever knowingly used, let alone configured, but I bet there are people here that have.

Computer viking fucked around with this message at 18:19 on Aug 24, 2022 |
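That "freeze forever vs. return errors" behavior is a mount-option choice. A minimal sketch of both sides, with hypothetical paths and addresses (/srv/data, 192.168.1.0/24, "server"):

```
# /etc/exports on the server: export /srv/data to the local subnet,
# read/write, with root on clients mapped to an unprivileged user
/srv/data  192.168.1.0/24(rw,root_squash)

# On a client, the trade-off is picked at mount time:
# "hard" (the default) freezes IO until the server comes back...
mount -t nfs -o hard server:/srv/data /mnt/data
# ...while "soft" gives up and returns an IO error after retrans retries,
# at the cost of programs seeing errors they may not handle well
mount -t nfs -o soft,timeo=50,retrans=3 server:/srv/data /mnt/data
```

After editing /etc/exports, `exportfs -ra` reloads it on most Linux NFS servers.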
# ? Aug 24, 2022 18:16 |
|
Just use NFS and set up syncthing for whatever files you need to share with windows, op. Also, doesn't windows support nfs now?
|
# ? Aug 24, 2022 18:45 |
|
Mr. Crow posted:Also doesn't windows support nfs now? Yes but only as mounts, not the easy temporary shares of SMB. Also android file manager apps with NFS support are not common. edit: also just to be clear, this isn't a critically important problem, just moderately annoying and weird as hell Klyith fucked around with this message at 20:07 on Aug 24, 2022 |
# ? Aug 24, 2022 19:20 |
|
And iOS has built in support for SMB.
|
# ? Aug 24, 2022 19:43 |
|
Music Theory posted:Why is SMB(?) the dominant network file sharing protocol?
|
# ? Aug 24, 2022 20:10 |
|
RFC2324 posted:Also, I think the protocol is actually CIFS now, if you ever need to get that granular My understanding is CIFS was an attempted renaming of SMB1 in the late 90s, but it didn't take hold officially. SMB became the official name with SMB 2.0.
|
# ? Aug 24, 2022 22:26 |
|
An nvidia driver update broke my Fedora again and the usual fix didn't take, so I took the opportunity to reinstall and revisit parts of my setup. I tried out systemd-automount for my NAS shares and auxiliary drives; I just threw everything directly into fstab before. Don't think I realized how much that might have impacted boot times. With a fresh install it could be anything, and I don't have any old logs to compare, but it is noticeably quicker now.

Also, I've been taking every change and making sure it's in my lil ansible playbook. It's not complete or particularly well organized, but it did get all my repos/packages/flatpaks reinstalled and my dev environment set up, and now it's doing much more. Still had my old home folder, but it'll clone my dotfiles repo and stow it into place (but maybe I should check out that new git method), set my usual dconf preferences/theme, download and install the fonts I like that aren't packaged, etc. I used to do a bit of this with a shell script, but as bad as yaml is, ansible does a much better job than I care to with bash.

It is a lot of work (can't imagine the poster who balked at backing up dotfiles would be into it), but I just like having the peace of mind. Obviously for the 'geforce at my system' recovery scenario, and it is fully sick to be able to turn a fresh/broken system into one set up exactly how I'd want it in a matter of minutes. But it's also nice having a written record of everything, the whole infrastructure-as-code thing. I don't want to have to remember every seldom-used but super handy utility, or every 'oh, not having X package makes building against Y fussy for unclear reasons' kind of bump you might hit.
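The systemd-automount setup mentioned above can be done straight from fstab. A sketch with a hypothetical NAS share name (nas:/export/media):

```
# /etc/fstab: mount on first access instead of blocking boot,
# and unmount again after 10 minutes of inactivity
nas:/export/media  /mnt/media  nfs  noauto,x-systemd.automount,x-systemd.idle-timeout=600,_netdev  0 0
```

systemd generates the .mount/.automount units from these options, so boot no longer waits on the NAS being reachable.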
|
# ? Aug 25, 2022 00:50 |
|
Have an error log:

Anyways, that seems unrelated to my problem. After wiping /var/lib/samba and reinstalling samba with a super-basic guest-only .conf, it still happens. So I'm writing this off to bleeding-edge distro problems. Ignore it and maybe it'll go away.
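For reference, a "super-basic guest-only .conf" along those lines might look like this - the share name and path are hypothetical, and this is a sketch, not a hardened config:

```
# smb.conf: minimal anonymous/guest-only share
[global]
   map to guest = Bad User
   guest account = nobody

[public]
   path = /srv/public
   guest ok = yes
   read only = no
```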
|
# ? Aug 25, 2022 01:10 |
|
Computer viking posted:The other big one on the *nix side is traditionally NFS, which works differently in ways that can be really useful and really annoying. Name and post is perfect. I love you E: Can I pay you under the table to write docs for me?
|
# ? Aug 25, 2022 03:43 |
|
Saukkis posted:My understanding is CIFS was an attempted renaming of SMB1 in the late 90s, but it didn't take hold officially. SMB became the official name with SMB 2.0.

gfdi, why do I see poo poo refer to CIFS all the time? Is it just because it's reverse compat?

Computer viking posted:

my company uses gluster, and it's hilarious how often it's "no, really, it's a gluster filesystem (that has a cron rsync that runs every minute, and all the servers access via nfs anyway)"
|
# ? Aug 25, 2022 05:38 |
Music Theory posted:Why is SMB(?) the dominant network file sharing protocol?

That being said though, there are a lot more servers than there are workstations/desktops/laptops - so even if only a small percentage of them use NFS, it's not completely unreasonable that NFS might get more use in total.

NFSv4 was also made to be explicitly compatible with Windows, macOS, and Unix-like OSes - specifically, the ACLs were readjusted to make sense, so that when you're doing NFSv4-only sharing, you can set your storage to use NFSv4 ACLs and everything will just work (well, except if you run Linux, as it's still missing support for NFSv4 ACLs).

RFC2324 posted:Its the windows file sharing protocol. If you aren't trying to work with a windows box, nfs is the standard, and there are plenty more options.

Curiously enough, SMB isn't even originally a Microsoft thing - it was made by IBM for PC interoperability.

RFC2324 posted:gfdi, why do I see poo poo refer to CIFS all the time? is it just because its reverse compat?
|
|
# ? Aug 25, 2022 08:36 |
|
BlankSystemDaemon posted:Like RFC2324 hinted, SMB isn't really dominant unless you only look at Windows, which accounts for a relatively small number of installations, and Windows Server integrates NFSv4(.1) support for interoperability with multi-OS environments I’m breaking up with the other guy. I want to marry you
|
# ? Aug 25, 2022 08:47 |
jaegerx posted:I’m breaking up with the other guy. I want to marry you

Computer viking isn't wrong in what they said either, but NFS has changed a lot since even just v4, with v4.1 being ratified in 2010 and v4.2 being worked on right now. Among other things, v4.2 includes sparse file support, space reservations (think quotas, but reversed), IO_ADVISE (ie. clients can hint at what they expect to need), server-side copies, and application data holes (useful for doing hypervisor guest disks via NFS instead of iSCSI), and it's entirely possible that NFS over TLS will be added too.

I've been testing NFS over TLS on FreeBSD (the only OS that has a working implementation of it) using my server and laptop, and perhaps the biggest strength is that with the removal of RPC from NFS, along with locking being moved internal to the protocol (instead of happening on a side-band connection, usually to the lockd process), it's now entirely reasonable to do NFS over WAN on a single TCP port with a connection that's entirely encrypted (without having to use something like krb5i, which is rear end to setup).

BlankSystemDaemon fucked around with this message at 10:24 on Aug 25, 2022 |
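From memory (so double-check against rpc.tlsservd(8), rpc.tlsclntd(8), and mount_nfs(8)), the FreeBSD NFS-over-TLS setup described boils down to something like this, with a hypothetical export and hostname:

```
# /etc/rc.conf on the server (FreeBSD 13+)
nfs_server_enable="YES"
nfsv4_server_enable="YES"
tlsservd_enable="YES"     # rpc.tlsservd handles the TLS side

# /etc/rc.conf on the client
nfs_client_enable="YES"
tlsclntd_enable="YES"     # rpc.tlsclntd, the client-side counterpart

# Client mount: one TCP connection, fully encrypted
mount -t nfs -o nfsv4,tls server.example.com:/export /mnt
```

Both daemons also need certificates set up; the details live in the man pages above.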
|
# ? Aug 25, 2022 10:18 |
|
jaegerx posted:I’m breaking up with the other guy. I want to marry you So your type is apparently grumpy BSD-using Scandinavian guys? Not a burgeoning market, but you do you. Also, I think you probably made the right choice there, he has a lot more experience writing readable but technical prose for public consumption than me.
|
# ? Aug 25, 2022 10:24 |
Computer viking posted:So your type is apparently grumpy BSD-using Scandinavian guys? Not a burgeoning market, but you do you. There's absolutely nothing wrong with your post, friend.
|
|
# ? Aug 25, 2022 10:25 |
|
BlankSystemDaemon posted:That certainly is a type! Thanks - it's always hard to balance not going into too much detail while still not saying anything outright wrong. Doubly so when doing it from memory without taking the time to verify anything. Still, you do undeniably have more technical writing experience than me.
|
# ? Aug 25, 2022 10:30 |
|
BlankSystemDaemon posted:I've been testing NFS over TLS on FreeBSD (the only OS that has a working implementation of it) Oh, thank God. Your previous post on network filesystems had no references to BSD at all so I was afraid your account had been hacked by some New Jersey-type lowlife
|
# ? Aug 25, 2022 11:49 |
Computer viking posted:Thanks - it's always hard to balance not going into too much detail while still not saying anything outright wrong. Doubly so when doing it from memory without taking the time to verify anything.

That's something I learned from years of writing technical documentation in the various positions I've held, but I think it ultimately has a lot to do with me liking documentation writing more than anything else. I do appreciate the compliment, but there's no reason to put yourself down all the same.

NihilCredo posted:Oh, thank God. Your previous post on network filesystems had no references to BSD at all so I was afraid your account had been hacked by some New Jersey-type lowlife

Or that's at least what I took from the redtext someone bought me.

BlankSystemDaemon fucked around with this message at 12:02 on Aug 25, 2022 |
|
# ? Aug 25, 2022 12:00 |
|
BlankSystemDaemon posted:FreeBSD (the only OS that I talk about)
|
# ? Aug 25, 2022 12:04 |
|
Teasing aside your tech posts are good
|
# ? Aug 25, 2022 12:11 |
|
This is just a curiosity on my part, but when I switched my k3s node's underlying OS from Ubuntu to Alpine, I went from the predictable slot-based network naming scheme to the old eth0/eth1/ethwhatever assignment. Which is also fine, but it was weird to me that it detected my 10gb mellanox as eth0 and my internal 10/100/1000 e1000e port as eth1. Lacking any additional knowledge, I would have assumed it would find the internal bus card first and give it eth0, and the add-on cards would be eth1+, which is obviously not the case here, but I could have sworn it WAS the case earlier. Is this a kernel toggle I can set somewhere, or is the detection order configured somewhere on modern Alpine that I can look at?

I mean, it's not a huge issue or anything, but since the e1000e is on-board it's unlikely to ever be removed or replaced, where I may or may not pull the 10gb card in the future, and it would be easier if I didn't have to worry about the port that I *can't* remove suddenly changing from eth1 to eth0. If I can easily switch it in a config file somewhere then awesome. If it's involved and I have to worry about kernel params or configs surviving package upgrades or something, then I'll just shrug and move on.
|
# ? Aug 25, 2022 12:51 |
|
The new interface names are part of udev. You could install it. If it's already installed, it's a simple config change - at least once you figure out which of the five-ish config locations Alpine uses to deactivate persistent names.
|
# ? Aug 25, 2022 13:39 |
|
some kinda jackal posted:This is just a curiosity on my part, but when I switched my k3s node's underlying OS from Ubuntu to Alpine I went from the predictable slot-based network naming scheme to the old eth0/eth1/ethwhatever assignment. Which is also fine, but it was weird to me that it detected my 10gb mellanox as eth0 and my internal 10/100/1000 e1000e port as eth1.

Alpine Linux philosophy is keep-it-simple, which in practice means configure it yourself like it's 2005. Alpine uses mdev and nameif to do that. I don't think you have to worry about the config surviving upgrades or anything. And I don't think there's any other way to do it with kernel params or whatnot - this debian wiki is a good history of why the predictable slot-based (or bus-based) names were created. The names get assigned as the module loads, so the kernel is working with a very limited amount of information when it decides what to name them.
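With mdev/nameif, pinning the names usually comes down to an /etc/mactab file mapping names to MAC addresses. A sketch with made-up MACs - note it's safer to pick names outside the kernel's ethN namespace, so the rename can't collide with an interface the kernel already called eth0/eth1:

```
# /etc/mactab: nameif renames interfaces to match these MACs at boot
lan0  00:25:90:aa:bb:cc   # onboard e1000e
10g0  00:02:c9:dd:ee:ff   # Mellanox 10GbE card
```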
|
# ? Aug 25, 2022 13:43 |
|
Really good details, thanks all. After some thought I’m not bothered enough by it, and honestly the easiest answer here is probably just to hardcode a dhcp lease for the e1000’s MAC so if it changes from eth1 to eth0 then idgaf I think. I’ve been trying to get away from statically addressing my lab so I can have more leeway with what becomes what, so I’ll just set both cards for dhcp and not worry about it.
|
# ? Aug 25, 2022 13:59 |
|
Alternately: disable the integrated ethernet in the BIOS, if it's not a board with IPMI or something else super-useful like that.
|
# ? Aug 25, 2022 14:09 |
|
some kinda jackal posted:Lacking any additional knowledge, I would have assumed it would find the internal bus card first and give it eth0 and the add-on cards would be eth1+ which is obviously not the case here, but I could have sworn it WAS the case earlier. Is this a kernel toggle I can set somewhere, or is the detection order configured somewhere on modern Alpine that I can look at?

It's not perfect though. Like, with the old style (that Alpine uses) you could pretty much guarantee a machine has an "eth0" of some variety, but machines won't necessarily have an "en0", and "enp17s0f1" definitely isn't guaranteed across different hardware either. Still, I've come to prefer predictable names in combination with systemd-networkd's ability to do glob matching on interface names.

What I typically do now is have a network configuration that matches on "en*" and requests DHCP. So if I have an Ethernet cable plugged into any interface, it attempts DHCP on them. Basically what NetworkManager does, but I'm not running NetworkManager.

The problem with this is that if you need to run a system service that depends (and not in a failsafe way) on the network being up before starting, you'll have to have it wait on the network-online.target, and Ubuntu is currently broken if you pull in network-online and you specify that you simply need any interface up, not all interfaces. So maybe don't run Ubuntu?
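On systemd-based distros, you can also pin a name outright with a .link file, so it follows the MAC no matter which slot the card is in (the MAC and name here are made up):

```
# /etc/systemd/network/10-lan.link
[Match]
MACAddress=00:25:90:aa:bb:cc

[Link]
Name=lan0
```

udev applies .link files early, so the chosen name is stable before networkd or NetworkManager ever sees the interface.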
|
# ? Aug 25, 2022 15:11 |
Wouldn't it be simpler to make it so that static or DHCP network configuration can be tied to the first interface that comes up? On the BSDs, ifconfig_DEFAULT="DHCP" in rc.conf(5) is enough.
|
|
# ? Aug 25, 2022 16:03 |
I'm having an issue where a specific flatpak program (parsec) is not running, but only if it gets called through steam. Running it by itself through an app launcher or from the desktop has no issues, and other flatpaks run fine when run through steam; it's just this specific program that's having trouble. Are any logs made on the flatpak level that could tell me just what exactly is loving up here?
|
|
# ? Aug 25, 2022 16:40 |
|
BlankSystemDaemon posted:Wouldn't it be simpler to make it so that static or DHCP network configuration can be tied to the first interface that comes up?

pre:
[Match]
Name=en*

[Network]
DHCP=ipv4
|
# ? Aug 25, 2022 17:13 |
|
some kinda jackal posted:Really good details, thanks all. After some thought I’m not bothered enough by it, and honestly the easiest answer here is probably just to hardcode a dhcp lease for the e1000’s MAC so if it changes from eth1 to eth0 then idgaf I think.

eth0 is given to the first interface to register itself with the kernel, so the driver for your 10g nic is loading before, or loading faster than, the e1000 interface. Or at least it did the first time; I dunno if Alpine Linux tries to remember past assignments and keep them consistent. Things happen in parallel these days, so if there is nothing enforcing consistency between boots then maybe one day the e1000 driver will present its interface first and get eth0.
|
# ? Aug 26, 2022 00:59 |
ExcessBLarg! posted:I don't know that this is anymore difficult:
|
|
# ? Aug 26, 2022 09:02 |
|
If you only have one interface plugged in, then who cares? If you want to use all of them, then it would probably be better to use udev predictable naming
|
# ? Aug 26, 2022 09:54 |
BattleMaster posted:If you only have one interface plugged in, then who cares? Device wiring as described in pci(4) is for predictable names, and I'd be shocked if Linux couldn't do that before udev, and I'd be pretty shocked if anyone used it now. BlankSystemDaemon fucked around with this message at 10:23 on Aug 26, 2022 |
|
# ? Aug 26, 2022 10:20 |
|
https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames

It says systemd/udev, so I'm not sure which one (or both) does it, but it's a thing. Also nice BSD man page, but this is for linux!!!
|
# ? Aug 26, 2022 10:22 |
|
BattleMaster posted:https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames I was using the FreeBSD example because it's what I'm familiar with. Network device naming according to udev/System500 isn't especially sensible.
|
|
# ? Aug 26, 2022 10:25 |