Woolie Wool
Jun 2, 2006


Suckless seem to actively hate most human beings and I think part of plan 9's appeal to them is precisely because it is mostly useless in real life, and thus they can have it all to themselves.

VictualSquid posted:

I haven't heard about that in decades. It sounded like some nerds managed to trick a company into paying for their theoretical cs research project. And no recent info to change my mind.

I actually have worked at a place that used Solaris thin clients for most computing. And it seems to have no customer-facing difference from the way Plan 9 is described. In fact you could probably move the server onto Kubernetes or some other distributed architecture and have 99% of what Plan 9 tries to offer users and admins.
But then it's not "pure" or "elegant" because beautiful code is far more important than doing actual work or serving anyone who isn't part of the hacker elite. It's especially funny reading their hyper-libertarian political rants where they openly resent the existence of other people and see them as threats.

Woolie Wool fucked around with this message at 15:11 on Apr 11, 2023

ExcessBLarg!
Sep 1, 2001

Woolie Wool posted:

but what, exactly, is the sort of approach to computers that suckless people/Plan 9 idealists actually want for society?
Keep in mind a few things about Plan 9:
  • 9P is actually used in real products. It's used in WSL (Windows) and Crostini (Chrome OS), plus I'm sure a bunch of other stuff, but these alone have shipped millions of units.
  • Plan 9 ideas have heavily influenced modern mixed-heritage OSes, Linux especially. /proc, FUSE, etc., all come from Plan 9.
  • UTF-8 was designed by Ken Thompson and implemented in Plan 9 before being adopted by X/Open and the IETF.
  • The Plan 9 cluster architecture is a clear influence on modern microservices.
So its development as a research OS has had clear industry-wide influence even if modern systems don't largely run "Plan 9".

Woolie Wool posted:

Do they want to lord over the internet as old-school '70s sysadmins writ large, who can mount the hard drives of the serfs' glorified terminals and read their personal information (seriously reading about plumbing, namespaces, 9P, etc. made me think of its :nsa: potential more than anything else)? Or do they just want computers to become completely irrelevant and most of the world go back to filing cabinets, rolodexes, and landline telephones while supporting a thousand wannabe Bell Labs to waste electricity on projects that revel in their own uselessness? Like, what do suckless people think computers are for?
Why do people still try to run 4.3 BSD on Vaxen, or Apple IIs for that matter? Plan 9 is an idealized OS and some people like to run it as a hobby. They may even be productive with it, but I doubt many use it exclusively and rely on some other system for, say, running a web browser.

I assume a lot of the attitude is intentional gatekeeping. These servers are open and just out there, but you have to be able to speak enough 9P to actually make use of them, and if you can do that then you're part of the club anyways. And while these things might be "interesting" there's not much monetization potential so it keeps away bad actors.

Woolie Wool posted:

E: Plan 9 in theory really sounds like the perfect :nsa: OS, a totalitarian regime with a hyper-elaborate version of plan 9's ideas (if they could actually work) could simply stop selling real computers altogether, and just let citizens have thin clients with minimal hardware that offload most of the processing to a giant nationwide blob of servers, mainframes, laptops, and virtual file systems that functions as one or a few giant systems where a hierarchy of administrators can pwn anyone's computers at any time just by changing directories or manipulating namespaces.
This is a bit complicated. Plan 9 came from a time when computer security was starting to get a bit more attention, but it still wasn't a primary concern. It was an era of the chaotic good. This hasn't been the case with the Internet since at least the early 00s.

That said, this is largely the modern computing model, albeit with a bit more security. Aside from everyone trying to move to cloud-based/subscription services, consider that "social media" — one of the biggest application platforms — is entirely cloud-based, with everyone running thin clients (apps) holding tons of personal and relationship data the spooks are very much interested in.

That said all my professional work is effectively cloud-based and I'm fine with that, since it's work I do to get paid and I don't really care who can see it as long as I get paid.

ExcessBLarg! fucked around with this message at 15:24 on Apr 11, 2023

Woolie Wool
Jun 2, 2006


I understand all of that, but I'm talking about a community that is furious that the actual IRL implementation of Plan 9 ideas is "impure" and that the original Plan 9 system is the One True Way for all computing. To say "I don't like glibc" is to say something very different from "glibc is harmful".

Woolie Wool fucked around with this message at 16:20 on Apr 11, 2023

ExcessBLarg!
Sep 1, 2001
Loonies gonna loon.

BlankSystemDaemon
Mar 13, 2009



Plan 9 might've had clustering, but it wasn't single-system-image clustering with proper process migration et al. - and those features are the only real reason to use clustering OSes over simply having regular fail-over like we do nowadays.

As far as suckless philosophy goes, remember that these are the folks that think that reading a config file from a disk is a feature that doesn't belong in a program, so they deliberately leave it out, and expect you to recompile every time you want to change a config option.

Kibner
Oct 21, 2008

Acguy Supremacy

BlankSystemDaemon posted:

As far as suckless philosophy goes, remember that these are the folks that think that reading a config file from a disk is a feature that doesn't belong in a program, so they deliberately leave it out, and expect you to recompile every time you want to change a config option.

As a software developer, this seems insane to me. Do they also count configs stored in a SQL database as reading from a disk? I just don't understand the thought process that could lead to these beliefs.

BlankSystemDaemon
Mar 13, 2009



Kibner posted:

As a software developer, this seems insane to me. Do they also count configs stored in a SQL database as reading from a disk? I just don't understand the thought process that could lead to these beliefs.
It's a loving mystery to me, but it's a pretty defining characteristic of their build instructions for things like dwm and st, that you have to edit config.h - it seems utterly user-hostile to me, a classic case of the developer going "You don't need that, because I know better".

tjones
May 13, 2005
Eh, most of the suckless philosophy is born from a fruitless effort to combat the useless bloat that seems to infect the majority of software these days. It's a coding best-practices TED talk taken to the extreme. It is trying to implement the linux/unix idea of doing one thing well, but with the absolute minimal number of lines of code possible. Unfortunately, like most everything else linux, the nerds go too far overboard into insanity.

The no config thing makes sense if you look at it from the argument of only needing to compile dwm/st once and then never again. There is also the point that the codebase is so barebones in its stock form that it makes it easier for you to add whatever you want to the system, which would include code to read a config file if that is what you wanted.

I've touched my daily system's config file for my WM maybe twice in the decade or so I've been using it. I've modified my terminal config file even less. Modifying a header file and recompiling the executable would net the same results with less overhead if I wanted to be anal-retentive about it.

cruft
Oct 25, 2007

Kibner posted:

As a software developer, this seems insane to me. Do they also count configs stored in a SQL database as reading from a disk? I just don't understand the thought process that could lead to these beliefs.

I'm a code contributor to dwm, I sort of understand it.

They never come right out and say it, but what they're really doing is research. It's, like, the audiophile version of research, where they copy what someone more famous was doing without understanding the motivations, but it's still research. And it's hobbyist research, too. Maybe this is one of the few ways unfunded research is even possible, I don't know.

dwm in particular invented a lot of stuff that got copied by more successful projects like i3. The dwm people (really just Anselm) have set the bar to entry pretty high by requiring you recompile it and edit source code to change behavior. It means that anyone who managed to get it running automatically has a baseline understanding of things, so Anselm doesn't have to put energy into making it accessible to everyone.

And I can relate to this! I've put energy into making things accessible to everyone: it is a hell of a lot of work. Probably more work than the software itself.

I've seen various other "raising the bar to entry" things. I was complaining a couple days ago about a git project I sent a 3-line patch to, who had a whole manifesto you had to read, then download and install a bunch of local code formatters and checks, then agree to the license in a github comment, then pass a 10-step CI/CD pre-check. Finally, the author rejected my 3-line change because he thought of a 12-line kludge involving double shell escapes that could be added to the README.

Honestly, contributing to dwm was a whole lot less work, for a much larger change. They're total jerks about it, but at least they're up front about it. The thing I just failed to fix, the author was still a jerk, but it was dressed up as being egalitarian and fair to everyone.

cruft fucked around with this message at 17:28 on Apr 11, 2023

ExcessBLarg!
Sep 1, 2001

Kibner posted:

As a software developer, this seems insane to me. Do they also count configs stored in a SQL database as reading from a disk? I just don't understand the thought process that could lead to these beliefs.
There's a project I maintain professionally that's similar, in the sense that configuration is done in code (or at least heavily using macros) and compiled into the binary. It works because the project is deliberately high-performance, low-footprint, critical path code. It's also infrequently touched, and when it is, often code-level (not just config) changes are needed as well. Also it's for internal consumption only, not a product intended for use by end users.

Again, if it's something where anyone looking at the code is probably already intimately familiar with it, then modifying config.h and having some static checking applied at compile time may well be "better" than editing config.xml and hoping that you didn't violate schema and that the 15 layers of reflection involved in process startup doesn't barf on you at runtime.

Klyith
Aug 3, 2007

GBS Pledge Week

cruft posted:

They never come right out and say it, but what they're really doing is research. It's, like, the audiophile version of research, where they copy what someone more famous was doing without understanding the motivations, but it's still research. And it's hobbyist research, too. Maybe this is one of the few ways unfunded research is even possible, I don't know.

I think you could go way more metaphorical.

Suckless are the monks living in a simple, bare temple on top of a mountain. They sit around and meditate on pure C with minimal libraries, or the virtues of making a complete WM in under 5000 lines of code. They don't do a whole lot else. But they have a philosophy based on an important truth, which is that smaller and simpler code is generally better than big and complicated code.


It may seem really pretentious, especially when the monks open their front gate and start making pronouncements about SystemD violating the principles of good code. "Small and simple! Do one thing and do it well!" they yell. "Don't use that combine harvester, the enlightened farmer can harvest a field with just this hoe!" It's really easy to make a jerk-off motion when they do that, especially when the monks themselves are willing to use X11 and the linux kernel and an old tractor they keep in a shed.

But the monks get respect from some coders. Because they do have a truth; not the only truth, but a good one. Sometimes a coder spends a week meditating with the monks and learns something. Probably they go back to making elaborate software that normal humans can use, only a bit better.


The final stage of enlightenment in Zen Buddhism is the return to society. If you have managed to become enlightened, you don't sit alone in a temple. You use your wisdom to benefit people around you. To the enlightened, SystemD is perfectly at harmony with suckless principles. It does one thing -- manages the OS -- and does it well. Because sometimes what the world needs is a goddamn combine harvester.

BlankSystemDaemon
Mar 13, 2009



tjones posted:

Eh, most of the suckless philosophy is born from a fruitless effort to combat the useless bloat that seems to infect the majority of software these days. It's a coding best-practices TED talk taken to the extreme. It is trying to implement the linux/unix idea of doing one thing well, but with the absolute minimal number of lines of code possible. Unfortunately, like most everything else linux, the nerds go too far overboard into insanity.

The no config thing makes sense if you look at it from the argument of only needing to compile dwm/st once and then never again. There is also the point that the codebase is so barebones in its stock form that it makes it easier for you to add whatever you want to the system, which would include code to read a config file if that is what you wanted.

I've touched my daily system's config file for my WM maybe twice in the decade or so I've been using it. I've modified my terminal config file even less. Modifying a header file and recompiling the executable would net the same results with less overhead if I wanted to be anal-retentive about it.
I'm not in the habit of touching .config/sway/config every day now, but when I was still learning it, it was a regular thing - and it does, on occasion, happen that I find something I want to integrate into it.

The logic of suckless, taken to an extreme (which may be a bit of a strawman, admittedly), is that I would then have to compile an update, discover something new, recompile again, and potentially even rinse and repeat that cycle?

That seems like too much.

Klyith posted:

The final stage of enlightenment in Zen Buddhism is the return to society. If you have managed to become enlightened, you don't sit alone in a temple. You use your wisdom to benefit people around you. To the enlightened, SystemD is perfectly at harmony with suckless principles. It does one thing -- manages the OS -- and does it well. Because sometimes what the world needs is a goddamn combine harvester.
That's the thing though, System500 isn't the end-all and be-all of service management - because if you want to do that, you have to go as far as Microsoft and Sun did.

Sun invented the fault management framework and Solaris Fault Manager in order to properly handle all the nonsense that userspace can get up to in regard to failing in absolutely bonkers ways; Microsoft has something similar in place (though it's less well-documented, and I don't remember too much about it).

Also, SMF works. :v:

BlankSystemDaemon fucked around with this message at 21:17 on Apr 11, 2023

BattleMaster
Aug 14, 2000

I love systemd. I don't think distros or most software should depend on it, because choice is good, but it really is great at what it does.

I like having consistency between all services and timers instead of a big pile of scripts and cron jobs with no central place to report status or failure (but cron can e-mail logs to you lmao at that being the primary way in 2023). And still all configured with text files in /etc, same as anything else. And it gives you a lot of control over the sequencing of how things start. When you make the service config you tell it what needs to happen before it and what needs to happen after it and during startup it figures out how to do things as parallel as possible while sticking to that order.

Logging is also just text files in /var but with help for easily checking them for things related to whatever service or whatever timer. I type systemctl status servicename and it shows me if the thing is running or not, how much memory it's using, how much CPU time it used, what its PID is, and the last few lines that the service logged. How do you write a program to log stuff with systemd? You don't, you just have it write to stdout and stderr as usual and that stuff gets logged. If you don't want that, you can tell it in the service config file where you want those streams to go.

It has a lot of components, but they are only even used if you do something that uses them. If you're weirded out by systemd having support for DNS and HTTP, don't do anything that uses those and it doesn't matter.

Like I wouldn't say it makes Linux super fun to use but it makes things as little of a pain in the rear end as possible without compromising on features.
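A minimal unit file illustrating the points above — sequencing via After=/Wants=, stdout/stderr captured by the journal, restart behavior — with the service name and binary path invented for illustration:

```ini
# /etc/systemd/system/example-worker.service (hypothetical)
[Unit]
Description=Example worker service
# Sequencing: only start once the network is up.
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/example-worker
# stdout/stderr go to the journal by default; redirect if you want:
#StandardOutput=append:/var/log/example-worker.log
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload` and `systemctl enable --now example-worker`, `systemctl status example-worker` shows the running state, memory and CPU usage, PID, and recent log lines described above.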

ExcessBLarg!
Sep 1, 2001

BlankSystemDaemon posted:

The logic of suckless, taken to an extreme (which may be a bit of a strawman, admittedly), is that I would then have to compile an update, discover something new, recompile again, and potentially even rinse and repeat that cycle?
I'm sure the entirety of Plan 9 can be compiled in under a minute on modern machines and incremental compilation is probably nanoseconds.

PHK's Varnish Cache comes to mind too. Its configuration is a DSL that's translated to C, compiled, and dlopened as part of starting the daemon. It doesn't take long.

BlankSystemDaemon
Mar 13, 2009



ExcessBLarg! posted:

I'm sure the entirety of Plan 9 can be compiled in under a minute on modern machines and incremental compilation is probably nanoseconds.

PHK's Varnish Cache comes to mind too. Its configuration is a DSL that's translated to C, compiled, and dlopened as part of starting the daemon. It doesn't take long.
A nanosecond is a fairly short amount of time, so that might be a bit of an overstatement - especially considering filesystems can't keep track of changes that close together, without modification.
On FreeBSD, it's handled by vfs_timestamp(9) - I don't know what handles it on Linux, but I'm curious whether it has something like the aforementioned facility.

Varnish is much closer to how JIT-compiled BPF (not eBPF) machine code in ipfw works - and I'm pretty sure phk mentioned it as being an inspiration for VCL at a Danish hacker conference several years back.

CaptainSarcastic
Jul 6, 2013



cum jabbar posted:

How have I gone all these years without KDE Connect? I can transfer files effortlessly now. So simple to set up. Love this avahi stuff.

KDE Connect is the most elegant solution I've found for a lot of things, including connecting my Android phones to Windows installs. The fact it is in the Microsoft Store is weird, but handy. I mean, it's better on a Linux desktop, but is perfectly usable on Windows, too.

Mr. Crow
May 22, 2008

Snap City mayor for life

CaptainSarcastic posted:

KDE Connect is the most elegant solution I've found for a lot of things, including connecting my Android phones to Windows installs. The fact it is in the Microsoft Store is weird, but handy. I mean, it's better on a Linux desktop, but is perfectly usable on Windows, too.

I just started using KDE Connect; is there a way to share the desktop on the phone or something? My use case was "steam big picture on the couch, game crashes and throws a popup, close popup and get back to big picture without leaving couch" but it kinda fails with multi-monitor, can't find the cursor, etc.

CaptainSarcastic
Jul 6, 2013



Mr. Crow posted:

I just started using KDE Connect; is there a way to share the desktop on the phone or something? My use case was "steam big picture on the couch, game crashes and throws a popup, close popup and get back to big picture without leaving couch" but it kinda fails with multi-monitor, can't find the cursor, etc.

You can use KDE Connect as a remote keyboard/mouse, but it doesn't have any screensharing capabilities. The remote input device functionality might work for your needs, though.

I would blow Dane Cook
Dec 26, 2008
They got up to some weird poo poo at Bell Labs, like inventing the transistor, or the time they took parts of the roof off to refurbish the Statue of Liberty.

The Atomic Man-Boy
Jul 23, 2007

I'm working with a program in Lutris that is looking for a file which is located in my regular home directory.

So it's looking for something like C:\users\some_guy\Documents\

But that is obviously in Wine; I want to create a symlink in Wine so it looks in /home/some_guy/Documents

How do I do that?

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:
Speaking of how fast modern machines are... Why do some things still take forever, like establishing a wifi connection, or Bluetooth scanning, or setting up an email client that verifies SMTP? Is there that much computation going on, or is it some overly generous timeouts happening at some low level? I have an email client that takes about 2 minutes to fail an SMTP check (because I fat-fingered the domain). I'd think it should be able to tell there's nothing on the other end pretty quickly.

CaptainSarcastic
Jul 6, 2013



Tad Naff posted:

some overly generous timeouts happening at some low level

I think that's a significant portion of it. Some other stuff is just set up in what seems like a very clunky fashion. When I worked tech support for an ISP the number of times I would sit and wait for a whole DHCP handshake to complete was kind of painful.

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:

CaptainSarcastic posted:

I think that's a significant portion of it. Some other stuff is just set up in what seems like a very clunky fashion. When I worked tech support for an ISP the number of times I would sit and wait for a whole DHCP handshake to complete was kind of painful.

Ha, that's what motivated my question, sort of. I've been working with tech support to get a replacement PVR activated. The waiting for various things to reboot and connect is off the chain.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

The Atomic Man-Boy posted:

I'm working with a program in Lutris that is looking for a file which is located in my regular home directory.

So it's looking for something like C:\users\some_guy\Documents\

But that is obviously in Wine; I want to create a symlink in Wine so it looks in /home/some_guy/Documents

How do I do that?

You want (winedir)/drive_c/users/some_guy/Documents to be a symlink to /home/some_guy/Documents. It should work; if it doesn't, there should be some obscure wine setting that deactivates the functionality.
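A sketch of that, done against a throwaway directory tree so nothing real is touched. In a real prefix, substitute your actual drive_c (usually `~/.wine/drive_c` or Lutris's per-game prefix) and move or remove the existing Documents folder before linking:

```shell
# Build a fake wine-prefix layout to demonstrate the link.
root=$(mktemp -d)
mkdir -p "$root/drive_c/users/some_guy"
mkdir -p "$root/home/some_guy/Documents"

# Point the Windows-side Documents at the Linux-side one.
ln -s "$root/home/some_guy/Documents" "$root/drive_c/users/some_guy/Documents"

# Anything written to "C:\users\some_guy\Documents" now lands in the real one.
readlink "$root/drive_c/users/some_guy/Documents"
```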

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

BattleMaster posted:

I love systemd. I don't think distros or most software should depend on it, because choice is good, but it really is great at what it does.

I like having consistency between all services and timers instead of a big pile of scripts and cron jobs with no central place to report status or failure (but cron can e-mail logs to you lmao at that being the primary way in 2023). And still all configured with text files in /etc, same as anything else. And it gives you a lot of control over the sequencing of how things start. When you make the service config you tell it what needs to happen before it and what needs to happen after it and during startup it figures out how to do things as parallel as possible while sticking to that order.

Logging is also just text files in /var but with help for easily checking them for things related to whatever service or whatever timer. I type systemctl status servicename and it shows me if the thing is running or not, how much memory it's using, how much CPU time it used, what its PID is, and the last few lines that the service logged. How do you write a program to log stuff with systemd? You don't, you just have it write to stdout and stderr as usual and that stuff gets logged. If you don't want that, you can tell it in the service config file where you want those streams to go.

It has a lot of components, but they are only even used if you do something that uses them. If you're weirded out by systemd having support for DNS and HTTP, don't do anything that uses those and it doesn't matter.

Like I wouldn't say it makes Linux super fun to use but it makes things as little of a pain in the rear end as possible without compromising on features.

Does systemd provide a mechanism to check for failed timers that's better than e-mail?
I suppose I can run the command to list failed units, but I feel like that goes in the wrong direction.
Also, while on the topic, does anybody else use systemd-cron? I am getting annoyed at its insistence on writing malformatted emails for cron.d failures. The baffling thing is that the emails work correctly for the timers I created with the crontab feature.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



VictualSquid posted:

Does systemd provide a mechanism to check for failed timers that's better than e-mail?
I suppose I can run the command to list failed units, but I feel like that goes in the wrong direction.
Also, while on the topic, does anybody else use systemd-cron? I am getting annoyed at its insistence on writing malformatted emails for cron.d failures. The baffling thing is that the emails work correctly for the timers I created with the crontab feature.

https://github.com/joonty/systemd_mon

There is also the "OnFailure=" directive, which you can add to specific services to trigger a notification if you only want a small number of services watched for failure

https://northernlightlabs.se/2014-07-05/systemd-status-mail-on-unit-failure.html

Nitrousoxide fucked around with this message at 09:53 on Apr 12, 2023
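The OnFailure= approach, sketched as unit fragments. The unit names and notifier path are hypothetical; the %n/%i specifiers are systemd's own (the failed unit's name gets passed into the template instance):

```ini
# my-job.service — the unit being watched (hypothetical name)
[Unit]
Description=Some periodic job
OnFailure=notify-failure@%n.service

# notify-failure@.service — a templated one-shot fired on failure;
# %i expands to the name of the failed unit.
[Unit]
Description=Send failure notification for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/send-alert "Unit %i failed"
```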

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
I can recommend Pushover as a notification method to phones. It's a one-off $5 buy, but it's easy to script message sending and it's way more reliable than email.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
Is there a simple way to check programmatically if a failed systemd unit exists on the system? systemctl --failed returns the list, but I can't find a way to make it return an informative return code. I could wrap it into grep or something, but I feel like I shouldn't need to.

Also, re: Pushover, my problem is that the program I want to check insists on writing to sendmail, using some formatting that causes it to be rejected regularly (but not always) by the server I forward it to via msmtp.
Is there a drop-in sendmail replacement for Pushover?

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
No idea if it's any good

https://github.com/YoRyan/smtp-translator

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
So, I found systemctl is-failed "*" -q which returns 0 if there are failed units on my system, I think. The output is utterly useless, which is why I set it to quiet.
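If you want conventional semantics instead (exit 0 = healthy), one option is a tiny wrapper over the listing. This hypothetical helper reads `systemctl --failed --plain --no-legend` output on stdin, prints the count of failed units, and exits nonzero when anything is failed:

```shell
count_failed() {
    # grep -c . counts non-empty lines; with no failed units it prints 0.
    n=$(grep -c . || true)
    echo "$n"
    # Exit status: 0 when nothing is failed, 1 otherwise.
    [ "$n" -eq 0 ]
}

# On a real system:
#   systemctl --failed --plain --no-legend | count_failed
```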

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



VictualSquid posted:

Is there a simple way to check programmatically if a failed systemd unit exists on the system? systemctl --failed returns the list, but I can't find a way to make it return an informative return code. I could wrap it into grep or something, but I feel like I shouldn't need to.

Also, re: Pushover, my problem is that the program I want to check insists on writing to sendmail, using some formatting that causes it to be rejected regularly (but not always) by the server I forward it to via msmtp.
Is there a drop-in sendmail replacement for Pushover?

Ansible maybe? You could pass it along a list of services you want it to investigate and it could report back if they are started, and you could also set it up to start them if they are inactive.

https://docs.ansible.com/ansible/latest/collections/ansible/builtin/systemd_module.html

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Nitrousoxide posted:

Ansible maybe? You could pass it along a list of services you want it to investigate and it could report back if they are started, and you could also set it up to start them if they are inactive.

https://docs.ansible.com/ansible/latest/collections/ansible/builtin/systemd_module.html

I really want to avoid listing every unit I want to check manually, or even semi-manually.
I ended up with two separate monit checks.
One exposes the list of failed units to the web interface, and a separate one sends an alert if any unit is failed.

BlankSystemDaemon
Mar 13, 2009



VictualSquid posted:

returns 0 if there are failed units
It does WHAT?!

xzzy
Mar 5, 2009

Given the language of the command, returning zero makes perfect sense.

BlankSystemDaemon
Mar 13, 2009



xzzy posted:

Given the language of the command returning zero makes perfect sense.
"Test successfully failed" is meant to be a joke, not something you put into production.

xzzy
Mar 5, 2009

But the test didn't fail. The command checks if any units are failed. If units are failed, it reports success because the command was successful at finding failures.

Klyith
Aug 3, 2007

GBS Pledge Week

Tad Naff posted:

Speaking of how fast modern machines are... Why do some things still take forever, like establishing a wifi connection, or Bluetooth scanning, or setting up an email client that verifies SMTP? Is there that much computation going on, or is it some overly generous timeouts happening at some low level? I have an email client that takes about 2 minutes to fail an SMTP check (because I fat-fingered the domain). I'd think it should be able to tell there's nothing on the other end pretty quickly.

Wifi and bluetooth have a lot of delay while they scan for SSID broadcasts / BT beacons. That means hopping through every channel in the spectrum and waiting for some amount of time looking for broadcast packets.

If you have a machine that only ever connects to one wifi network, it might connect faster by turning on the option that you use to join non-broadcasting wifi networks. I think logically that would skip waiting for the SSID interval and just immediately connect... but that would also be up to the OS / software to make that call, so who knows!


And then you get into network congestion. I've never thought that wifi took forever to connect; when I wake up a machine with wifi it's generally connected before I'm done logging in. But I don't live in an apartment building or otherwise have terrible wifi quality.

ExcessBLarg!
Sep 1, 2001

Tad Naff posted:

Why do some things still take forever,
The general answer to this question is that after 40 years of UI systems engineering, feature creep still accumulates to the point where loading times are unbearable, and then there's a concerted effort to fix it. Short of a dramatic technology change (for example, moving from magnetic disks to SSDs) you're not going to get those gains for free. Some systems are designed for efficiency from the ground up, but most are designed for marketable features.

Tad Naff posted:

like establishing a wifi connection, or Bluetooth scanning,
There's a few things here: Timeouts are defined by protocol, and these protocols are nearing 30 years old. WiFi and Bluetooth specifically prioritize backwards compatibility, which means devices have to speak the earliest/most-primitive versions of the protocols and negotiate an upgrade in the communications channels to the latest versions. Also, when it comes to RF, you still have fundamental issues like SNR and propagation characteristics meaning that you can only do "so much" to improve behavior in non-ideal circumstances, and the bands are only getting more congested over time.

Tad Naff posted:

or setting up an email client that verifies SMTP?
Baseline Internet protocols (IP, DNS, even SMTP) are 40+ years old now. Everything modern is built on top of these earliest protocols, but they'll still inherit fundamental characteristics. Most modern stuff is built on top of some form of HTTP(S), though email is notably not one of those things. This is why Google was pushing for HTTP/2 (SPDY) a while back because it was a fundamental change to the underlying protocol to improve efficiency.

If you fat-finger a domain, you're doing a bunch of DNS requests to figure out if the host is valid. If the domain didn't exist at all you'd probably get a response quickly, but now between wildcard DNS and domain squatters there are probably valid entries for the domain, but maybe not the host, and the listed nameserver is in who-knows-what state of responsiveness. These protocols still rely on fundamentally "long" timeouts because RTTs can only improve so much and there might be some old/slow router in the way; it's just part of the protocol.

Timeouts are almost always a guess and rarely optimized. Maybe that's the fundamental lesson here.

Computer viking
May 30, 2011
Now with less breakage.

Making the user wait 30 seconds once if they mistype is probably preferable to sporadically failing on valid but slow domains. I'd love a "Give Up" button, though.

Klyith
Aug 3, 2007

GBS Pledge Week

Computer viking posted:

I'd love a "Give Up" button, though.

:f5:
