RFC2324
Jun 7, 2012

http 418

VostokProgram posted:

Fedora doesn't use yum anymore, it's dnf now

still better than a GUI updater

I swear by YaST, but I still avoid its drat GUI like the plague

GUIs are for web browsing and playing video games, actual things that matter should be done via the CLI


speaking of CLI, are there any CLI based discord clients that are still developed? all the ones I could find don't work anymore because the devs got afraid of being banned from discord


Yaoi Gagarin
Feb 20, 2014

RFC2324 posted:

still better than a GUI updater

I swear by YaST, but I still avoid its drat GUI like the plague

GUIs are for web browsing and playing video games, actual things that matter should be done via the CLI


speaking of CLI, are there any CLI based discord clients that are still developed? all the ones I could find don't work anymore because the devs got afraid of being banned from discord

Oh yeah for sure, I never use any GUI update tools. They all suck lol

Computer viking
May 30, 2011
Now with less breakage.

ExcessBLarg! posted:

I don't mean the UIs, I mean the approach to updates, automatic updates, 0-day handling, etc. I don't know if Windows 10 is any better about it, but my wife's Windows 8(ish) laptop from 2015 still has the "oh you want to shutdown? gotta spend 30 minutes installing updates first!" behavior.

That's absolutely still a thing, though at least it feels like the actual installation has gotten faster - though of course that might just be hardware improvements.

I did also fix a Windows 10 printer issue today ("printing from 'modern' apps doesn't work") by finding and copying a DLL from somewhere in the specific printer driver into the spooler/x64/ folder. Very 1999 - and about the same amount of work and research as finding out how the hell you get the Citrix client on Fedora to import a root certificate. Not a company-specific root certificate, mind you - but Digicert's most commonly used one.

Which is the long way to say that all modern OSes suck, but you can learn to live with most of them.

RFC2324
Jun 7, 2012

http 418

Computer viking posted:

That's absolutely still a thing, though at least it feels like the actual installation has gotten faster - though of course that might just be hardware improvements.

I did also fix a Windows 10 printer issue today ("printing from 'modern' apps doesn't work") by finding and copying a DLL from somewhere in the specific printer driver into the spooler/x64/ folder. Very 1999 - and about the same amount of work and research as finding out how the hell you get the Citrix client on Fedora to import a root certificate. Not a company-specific root certificate, mind you - but Digicert's most commonly used one.

Which is the long way to say that all modern OSes suck, but you can learn to live with most of them.

gonna say I'm glad they moved a lot of the updates to shutdown instead of leaving them all on startup the way they used to be

the linux way of doing it live remains superior

Insurrectionist
May 21, 2007
I'm running Fedora on a mildly scuffed laptop. The issue with this laptop is that its wifi adapter will randomly not launch properly, which is a known issue and sending it in for repairs did poo poo. I have previously used it for months without ever running into a problem, and I've also had days where it refuses to work. To 'solve' this, I just bought a Netgear A6100 USB Wifi adapter that I can plug in and use.

I installed Fedora dual-boot in August, and once the same wifi issues started cropping up a couple weeks later I installed the USB driver and everything seemed alright. Most of the time the actual laptop adapter will work, and when it refused the Netgear would take over nicely. However, with the 5.14.9-200 kernel update this is borked. After installing it, my laptop adapter has for some reason not worked even once, and the Netgear adapter refuses to fire - I can still find it with lsusb, but I assume I need to reinstall drivers or something.

This is made more annoying by the fact that this laptop has a weird network cable port where none of my cables nor any I could find at the local hardware stores will actually fit, so I can't use that to get online and try to redownload and install Netgear drivers that way.

I can boot on the previous version and both work fine (well, 90% of the time only for the regular adapter, but oh well), and that was my solution for a week or so. But lately I've been running into a few issues with this version including random freezes, apps not starting, etc that is making it very annoying to use. Any ideas for fixing this up? I don't have a lot of experience actually using Linux as my normal OS, as opposed to just firing it up in a VM occasionally where this kind of stuff wasn't really an issue. A distressing number of google results for the various issues I've had with this along the way have been replies from within the past year saying 'this is a known issue we hope to have it fixed soon' which has not inspired confidence.

E: Fedora 34 btw

Insurrectionist fucked around with this message at 09:14 on Oct 12, 2021

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
I believe the easiest way to get temporary networking in such a situation is to use the tethering mode on your cellphone, which acts as a USB wifi adapter. This has worked out of the box on every desktop Linux I have tried it on.
From there you can reinstall the drivers as a first step.

You could also use lspci -k to check whether it lists a driver, and which one. If a driver is listed there, the problem is probably somewhere else. But reinstall and reboot anyway to be sure.

Then check your syslog for error messages related to networking as the next step.
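A sketch of those checks, with guards so each step is simply skipped when a tool isn't present (the driver names are examples, not guarantees - the A6100 is Realtek-based, so rtl8* is an assumption worth grepping for):

```shell
# 1. Is a driver bound to the built-in adapter? ("Kernel driver in use" line)
command -v lspci >/dev/null 2>&1 && lspci -k | grep -iA3 network

# 2. Any firmware/driver errors from this boot?
command -v journalctl >/dev/null 2>&1 && \
    journalctl -b --no-pager 2>/dev/null | grep -iE 'firmware|wlan|rtl8' | tail -n 20

# 3. What does NetworkManager currently see?
command -v nmcli >/dev/null 2>&1 && nmcli device status

checked=yes
```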

Insurrectionist
May 21, 2007
Thanks for the ideas, lspci -k shows the Netgear adapter drivers installed, so I guess that's not the issue. Which is itself annoying since it was my only idea. I guess I should try tethering and reinstalling them.

I took a look at systemctl logs, but I couldn't find anything much there. I was wondering if the issue was network manager, but if I run nmcli device it doesn't find any wifi devices at all (not even unavailable, just not listed), and nmcli dev wifi list is empty.
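For completeness, two quick checks that distinguish "radio blocked" from "driver never bound" when nmcli shows no wifi devices at all (a guarded sketch, nothing here is specific to this laptop):

```shell
# each check is skipped when the tool is missing
command -v rfkill >/dev/null 2>&1 && rfkill list     # soft/hard blocked radio?
command -v nmcli  >/dev/null 2>&1 && nmcli radio     # wifi radio switched on?
ls /sys/class/net/ 2>/dev/null                       # does a wl* interface exist at all?
checked=yes
```

If no wl* interface exists in /sys/class/net, the driver never created one and rfkill won't help; that points back at the driver/firmware.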

Truga
May 4, 2014
Lipstick Apathy

RFC2324 posted:

speaking of CLI, are there any CLI based discord clients that are still developed? all the ones I could find don't work anymore because the devs got afraid of being banned from discord

https://github.com/terminal-discord/weechat-discord

i used this. it worked fine for a few years but then i broke it and cba fixing it because i don't use it much anymore

weechat is more or less just newer irssi

BlankSystemDaemon
Mar 13, 2009




Truga posted:

weechat is more or less just newer irssi
Newer in what way? WeeChat and irssi are both actively maintained with regular patches.
Feature-wise, they're pretty much comparable - except that irssi supports FreeBSD's Capsicum sandboxing framework.

Armauk
Jun 23, 2021


Truga posted:

weechat is more or less just newer irssi
I've had some random issues with weechat that have given me a ton of headaches lately. The problems have sat dormant as GitHub issues for years, and the devs, unfortunately, don't intend to fix them. That made me switch to irssi, and I haven't had any issues since.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Insurrectionist posted:

This is made more annoying by the fact that this laptop has a weird network cable port where none of my cables nor any I could find at the local hardware stores will actually fit, so I can't use that to get online and try to redownload and install Netgear drivers that way.

What's the model of the laptop? The network port may be broken or it may be one of those few hinged ports, since there is only a single standard for network ports.

RFC2324
Jun 7, 2012

http 418

Saukkis posted:

What's the model of the laptop? The network port may be broken or it may be one of those few hinged ports, since there is only a single standard for network ports.

I was thinking someone didn't know what a modem port looks like

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

RFC2324 posted:

I was thinking someone didn't know what a modem port looks like

I considered that, but modem ports are quite rare nowadays, and there's usually a network port next to it.

DaveSauce
Feb 15, 2004

Oh, how awkward.
I have some questions on VNC/screen sharing in Debian 11.

I'm trying to get an old laptop set up as a server for Home Assistant. Before locking it away in a closet that I'll have to dig it out of to make changes, I want to be able to access it from my windows machine. Currently using Ultra VNC Viewer on that side with default settings.

In Debian, I go to Settings and enable "Screen Sharing" and it works... except I only get 1/4 of the screen. Google says this is a resolution issue with the bundled VNC server, but no matter what resolution I set the laptop to it does the same thing.

So I tried to set up TigerVNC, and I'm having issues. I'm following the instructions in these two links:

https://tecadmin.net/how-to-install-vnc-server-on-debian-10/
https://computingforgeeks.com/install-and-configure-tigervnc-vnc-server-on-debian/

Which are practically identical. The issue comes in the step where I modify the ~/.vnc/xstartup file. They both recommend this:

code:
#!/bin/sh
xrdb $HOME/.Xresources
vncconfig -iconic &
dbus-launch --exit-with-session gnome-session &

So doing this, when I run vncserver via terminal it errors out saying it can't find ~/.Xresources. So I touch the file (it's blank), and after that every time I run vncserver it just hangs: it doesn't feed anything back to the terminal or return to the prompt, it just sits there. So I'm not sure whether the issue is with the ~/.Xresources file or with the ~/.vnc/xstartup file.

So some caveats:

I have literally no idea what any of that means, and I haven't used Linux for a very long time, so I'm sure there's a million things I'm missing and I don't know much of what I'm doing. No real clue what other things to do to troubleshoot.

So any idea what's up? Googling seems to indicate that the xstartup file is nowhere near correct... every other site I find has a wildly different setup. But they are for different distros, so I don't know if this is distro dependent or what.
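For anyone else who lands here: the initial error comes from xrdb being handed a file that doesn't exist. A guarded variant of that xstartup (a sketch - gnome-session is assumed, as in those guides) avoids both the abort and the need for a dummy ~/.Xresources:

```shell
# write a guarded xstartup: xrdb only runs when ~/.Xresources actually exists,
# and `exec` keeps the script alive for the lifetime of the session
mkdir -p "$HOME/.vnc"
cat > "$HOME/.vnc/xstartup" <<'EOF'
#!/bin/sh
[ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
vncconfig -iconic &
exec dbus-launch --exit-with-session gnome-session
EOF
chmod +x "$HOME/.vnc/xstartup"
```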

DaveSauce fucked around with this message at 22:06 on Oct 12, 2021

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
Instead of VNC, you could try RDP. I don't remember ever having to do much to get xrdp installed and working.

DaveSauce
Feb 15, 2004

Oh, how awkward.

Pablo Bluth posted:

Instead of VNC, you could try RDP. I don't remember ever having to do much to get xrdp installed and working.

If that's easier then I'll give it a shot, thanks! For some reason the guides I read made it seem like RDP was more difficult or less likely to work than VNC. I may have already made up my mind, though, since I use VNC a lot at work.

Voodoo Cafe
Jul 19, 2004
"You got, uhh, Holden Caulfield in there, man?"

DaveSauce posted:

I have some questions on VNC/screen sharing in Debian 11.

I'm trying to get an old laptop set up as a server for Home Assistant. Before locking it away in a closet that I'll have to dig it out of to make changes, I want to be able to access it from my windows machine. Currently using Ultra VNC Viewer on that side with default settings.

In Debian, I go to Settings and enable "Screen Sharing" and it works... except I only get 1/4 of the screen. Google says this is a resolution issue with the bundled VNC server, but no matter what resolution I set the laptop to it does the same thing.

So I tried to set up TigerVNC, and I'm having issues. I'm following the instructions in these two links:

https://tecadmin.net/how-to-install-vnc-server-on-debian-10/
https://computingforgeeks.com/install-and-configure-tigervnc-vnc-server-on-debian/

Which are practically identical. The issue comes in the step where I modify the ~/.vnc/xstartup file. They both recommend this:

code:
#!/bin/sh
xrdb $HOME/.Xresources
vncconfig -iconic &
dbus-launch --exit-with-session gnome-session &

So doing this, when I run vncserver via terminal it errors out saying it can't find ~/.Xresources. So I touch the file (it's blank), and after that every time I run vncserver it just hangs: it doesn't feed anything back to the terminal or return to the prompt, it just sits there. So I'm not sure whether the issue is with the ~/.Xresources file or with the ~/.vnc/xstartup file.

So some caveats:

I have literally no idea what any of that means, and I haven't used Linux for a very long time, so I'm sure there's a million things I'm missing and I don't know much of what I'm doing. No real clue what other things to do to troubleshoot.

So any idea what's up? Googling seems to indicate that the xstartup file is nowhere near correct... every other site I find has a wildly different setup. But they are for different distros, so I don't know if this is distro dependent or what.

I would strongly recommend NoMachine for this over VNC; it's much snappier and you can do USB passthrough, etc. Also, compositing desktops like GNOME usually don't play well with VNC.

If you're dead set on VNC: most Linux VNC servers create a separate virtual $DISPLAY for each user who connects, usually at the default resolution of 640x480. This is likely why your screen is so small. But most of us aren't going to have multiple people connecting over VNC, so this is mostly a waste. If you want your VNC to mirror the built-in display rather than create a virtual one, forget TigerVNC and use x11vnc instead; I find it much less of a pain.
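A typical x11vnc invocation for that mirroring setup might look like this (a sketch: the display number, auth guessing, and port are common defaults, not universal; the guard keeps it from launching when there's no X session to mirror):

```shell
# mirror the real display instead of a virtual one; -forever keeps serving
# after the first client disconnects, -usepw prompts for/uses ~/.vnc/passwd
cmd="x11vnc -display :0 -auth guess -usepw -forever -rfbport 5900"

if [ -n "$DISPLAY" ] && command -v x11vnc >/dev/null 2>&1; then
    $cmd
else
    echo "would run: $cmd"
fi
```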

BlankSystemDaemon
Mar 13, 2009




RDP is faster than VNC, but SPICE is even faster - it's typically used for virtualization for the hypervisors that support it, but Xspice lets you hook it to any X session.

spiritual bypass
Feb 19, 2008

Grimey Drawer

Voodoo Cafe posted:

I would strongly recommend NoMachine for this over VNC; it's much snappier and you can do USB passthrough, etc. Also, compositing desktops like GNOME usually don't play well with VNC.

If you're dead set on VNC: most Linux VNC servers create a separate virtual $DISPLAY for each user who connects, usually at the default resolution of 640x480. This is likely why your screen is so small. But most of us aren't going to have multiple people connecting over VNC, so this is mostly a waste. If you want your VNC to mirror the built-in display rather than create a virtual one, forget TigerVNC and use x11vnc instead; I find it much less of a pain.

The contemporary open source version of NoMachine is called X2go now. As long as you have a working ssh server and a graphical desktop, X2go works really well. It's as fast as RDP.

BlankSystemDaemon
Mar 13, 2009




What's the difference between X2Go and ssh -CX?
One of the major issues with X is the synchronous round-trip model it's built on, and so far as I can tell, it doesn't even attempt to do anything about that.

Considering the server software is Linux-only, it seems a strictly inferior solution.

spiritual bypass
Feb 19, 2008

Grimey Drawer
ssh X forwarding sucks complete rear end in my experience while x2go does not; I don't know the details

ExcessBLarg!
Sep 1, 2001

BlankSystemDaemon posted:

What's the difference between X2Go and ssh -CX?
Back when I used NX (NoMachine) for a bit, the main difference was that it aggressively cached the X11 pixmap data so that modern GUI applications were actually responsive even over a WAN connection. From a user perspective the main difference is that NX sessions are persistent.

BlankSystemDaemon posted:

One of the major issues with X is the synchronous round-trip model it's built on, and so far as I can tell, it doesn't even attempt to do anything about that.
That's exactly what it addresses.

BlankSystemDaemon posted:

Considering the server software is Linux-only, it seems a strictly inferior solution.
NX as FreeNX was a giant pain in the rear end to configure and keep up to date when I used it a decade ago. If you only care about Linux servers I suppose it's fine, but I don't know if X2Go is better integrated with distributions or not.

ExcessBLarg! fucked around with this message at 14:58 on Oct 13, 2021

BlankSystemDaemon
Mar 13, 2009




ExcessBLarg! posted:

Back when I used NX (NoMachine) for a bit, the main difference was that it aggressively cached the X11 pixmap data so that modern GUI applications were actually responsive even over a WAN connection. From a user perspective the main difference is that NX sessions are persistent.
What does persistence mean in this context?

ExcessBLarg! posted:

That's exactly what it addresses.
How? I'm not finding any documentation saying it does, nor the technical details.
Unless they're modifying Xlib, which it doesn't look like to me, I don't see how they can do it.

ExcessBLarg! posted:

NX as FreeNX was a giant pain in the rear end to configure and keep up to date when I used it a decade ago. If you only care about Linux servers I suppose it's fine, but I don't know if X2Go is better integrated with distributions or not.
I'm a FreeBSD user, so yes I do care that software isn't Linux-only.

More importantly though, I don't understand why Linux users don't - there's no downside to ensuring portability, and as a practical upside portability goes a long way towards avoiding entire classes of bugs.
Moreover, what happens if a (group of) corporation(s) gets control of the Linux kernel? This isn't nearly as unlikely as a lot of people seem to think, since the Linux Foundation is already just a business association with no requirement that they produce anything for the public good.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
Does anybody know a zfs snapshot manager that uses a pruning philosophy similar to borgbackup's? Where all snapshots are equal when taken, and a lucky one gets promoted to be the daily/yearly one once it gets old.

Or does anyone have advice on my plans, or see pitfalls in them?
I am currently rethinking my home storage. I have an rpi with Nextcloud, Calibre and so on, and I currently sync some data to it with rsync.
I want to put in a large drive so it actually mirrors all of my main computer's data. The main PC has the data on drives that I am currently migrating from btrfs to ZFS.
I am still undecided between using glusterfs replication or some zfs-send tool (sanoid/syncoid). I currently have no application that would need syncing from the pi to the PC, but that might change. On the other hand, gluster would have some overhead.
Has anybody got experience with backing up glusterfs volumes? As I understand it I could just use zfs snapshot on one of the underlying drives, but I am not sure whether there's craziness ahead.

Also, what is the best way to back up my ext4 root drive to the ZFS system? I currently use borg, but it should be possible to rsync the files over and have the snapshot daemon do the rest.

ExcessBLarg!
Sep 1, 2001

BlankSystemDaemon posted:

What does persistence mean in this context?
It's like VNC/RDP, you login to a desktop and when you disconnect from the session the desktop (and applications) remain running on the server. When you connect to the session again the applications appear again. This isn't suspending either, as the applications will reflect their most recently updated state.

BlankSystemDaemon posted:

How? I'm not finding any documentation saying it does, nor the technical details.
Unless they're modifying Xlib, which it doesn't look like to me, I don't see how they can do it.
See this. Instead of directly tunneling the X11 protocol, it's basically a chain of nested/proxied X11 instances.

BlankSystemDaemon posted:

I'm a FreeBSD user, so yes I do care that software isn't Linux-only.
If.

Anyways, my main issue with NX years ago was that the core technology was open source but it was clearly a commercial effort, which meant there was no real effort to integrate it back into the community. If you were OK running the proprietary version, that was fine as long as it worked on your platform, but if it didn't it was a giant pain in the rear end. Not supporting FreeBSD (which FreeNX might actually have supported, I can't recall) is part of that giant pain in the rear end.

BlankSystemDaemon posted:

More importantly though, I don't understand why Linux users don't - there's no downside to ensuring portability, and as a practical upside portability goes a long way towards avoiding entire classes of bugs.
You're preaching to the choir here. I personally care about Linux servers because that's all I have to deal with, but I also care that software is generally portable for reasons you mention.

ExcessBLarg!
Sep 1, 2001

BlankSystemDaemon posted:

Moreover, what happens if a (group of) corporation(s) gets control of the Linux kernel?
What exactly do you mean here? Even if someone adversarial "gains control" of the Linux kernel, they can't retroactively relicense the kernel source code. Due to the sheer number of copyright claimants on the kernel, it will forever be licensed under the terms of the GPLv2 as distributed with the kernel itself.

We've already had this exact event happen when Oracle bought Sun and threw their weight around with MySQL. What happened was predictable: MySQL was forked as MariaDB, and regardless of which of those products you use, the existence of the other keeps them both honest. It's not like the world abandoned MySQL for Postgres or something.

Anyways, most of the time when I write software it's contractually-scoped to be deployed on a specific target platform. I don't intentionally go out of my way to write non-portable code, but if that platform offers specific benefits I'll often make use of them. If the scope changes and I need to make it more portable then I'll shim/compat/replace stuff as needed to achieve that. That said, if you're an open source project trying to achieve maximal adoption then, yes, portability should be a primary concern.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
The world does appear to have been slowly losing interest in mysql for something else .... (not Postgres or Maria)
https://trends.google.com/trends/explore?date=all&q=mysql,mariadb,%2Fm%2F05ynw

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

BlankSystemDaemon posted:

Moreover, what happens if a (group of) corporation(s) gets control of the Linux kernel? This isn't nearly as unlikely as a lot of people seem to think, since the Linux Foundation is already just a business association with no requirement that they produce anything for the public good.
That happens pretty regularly to all sorts of free software. Arguably it already happened to the kernel, with the only unusual part being that google didn't manage to get the linux trademark when they made android.

Antigravitas
Dec 8, 2019

Die Rettung fuer die Landwirte:

VictualSquid posted:

Does anybody know a zfs snapshot manager that uses a pruning philosophy similar to borgbackup's? Where all snapshots are equal when taken, and a lucky one gets promoted to be the daily/yearly one once it gets old.


What is the advantage to promoting a snapshot to a daily/weekly/monthly one over sanoid's way?

FWIW, my home storage setup consists of a mirror zpool that other stuff backs up to via borgmatic; one user per device, one dataset per device, with ssh restricted to borg serve for each. I keep ZFS snapshots of the borg backups just as another safeguard against something going wrong.

If you have zfs on other devices sure, just zfs send it over and let the backup machine manage its own snapshots.

I wouldn't recommend glusterfs. It's a level of janitoring I wouldn't want at home and the backup strategy is typically to use rsync…
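For reference, the zfs-send route can be sketched like this. The pool names (tank, backup) and the host (rpi) are made up, and the commands are echoed rather than executed since they only make sense against real pools; sanoid/syncoid wrap the same primitives:

```shell
# hypothetical names: pool "tank" on the PC, pool "backup" on the rpi
today="tank/data@$(date +%F)"

# first run: take a recursive snapshot and send the full stream
echo "zfs snapshot -r $today"
echo "zfs send -R $today | ssh rpi zfs receive -F backup/data"

# later runs: -I sends only the increments between the last snapshot the
# rpi already has (placeholder; find it with 'zfs list -t snapshot') and today's
echo "zfs send -RI tank/data@<last-on-rpi> $today | ssh rpi zfs receive -F backup/data"
```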

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Is there a reason why the /dev/shm mount isn't in fstab on default Ubuntu and Debian configs? Is it always set to half of RAM, and is there any good reason for that?

I stumbled on this when like a moron I accidentally did a umount -a when I meant to mount -a, and then /dev/shm was not remounted when I did mount -a. I'd appreciate a quick blurb about the history of /dev/shm and why we don't have tmpfs configuration info in /etc/fstab.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Antigravitas posted:

What is the advantage to promoting a snapshot to a daily/weekly/monthly one over sanoid's way?
Mostly that I am used to it. But I also feel that I can switch backup strategies more dynamically.

RFC2324
Jun 7, 2012

http 418

I support gluster at work and it's constant work, and apparently version 7 broke self healing, so

ExcessBLarg!
Sep 1, 2001

Twerk from Home posted:

Is there a reason why the /dev/shm mount isn't in fstab on default Ubuntu and Debian configs?
These days it's automounted by systemd. I assume you can put an entry in /etc/fstab for /dev/shm and control options for it--if you really want--but systemd mounts it even without an /etc/fstab entry since some (essential?) userspace programs are dependent on /dev/shm existing and it would be bad to break those because the configuration doesn't exist.

Twerk from Home posted:

Is it always set to half of RAM, and is there any good reason for that?
So that if something accidentally fills /dev/shm it doesn't fill all your RAM.

If you mean, "why does half my RAM go to /dev/shm?" that doesn't actually happen. tmpfs-based file systems allocate pages on demand, so they take up very little memory when not actually used.

Twerk from Home posted:

I'd appreciate a quick blurb about the history of /dev/shm and why we don't have tmpfs configuration info in /etc/fstab.
See this.

As for the history, it depends. Debian-based systems prior to systemd mounted the tmpfs on /run/shm and linked /dev/shm to that. Before /run was a thing, it was mounted on /dev/shm. At some point in the past /dev/shm didn't exist. At some point in the past before that tmpfs didn't exist. For a while these things were all specified in /etc/fstab as distribution defaults.
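If you do want it pinned in /etc/fstab with an explicit cap anyway, an entry along these lines works (the size and extra options here are just examples):

```
tmpfs  /dev/shm  tmpfs  defaults,size=2g,nosuid,nodev  0  0
```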

Truga
May 4, 2014
Lipstick Apathy

BlankSystemDaemon posted:

Newer in what way? WeeChat and irssi are both actively maintained with regular patches.
Feature-wise, they're pretty much comparable - except that irssi supports FreeBSDs capsicum sandboxing framework.

just in a "didn't exist in the 90s" way :v:

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Antigravitas posted:

FWIW, my home storage setup consists of a mirror zpool that other stuff backs up to via borgmatic; one user per device, one dataset per device, with ssh restricted to borg serve for each. I keep ZFS snapshots of the borg backups just as another safeguard against something going wrong.
How efficient are those snapshots of the backups? I imagine that especially with encrypted backups they are basically copies without saving space.

Mr. Crow
May 22, 2008

Snap City mayor for life
Re: discord chat the flatpak version has always been pretty reliable on fedora for me.

RFC2324
Jun 7, 2012

http 418

Mr. Crow posted:

Re: discord chat the flatpak version has always been pretty reliable on fedora for me.

Oh yeah, pretty sure I can grab the flatpak on openSUSE too.

Still gonna check out the cli stuff for maximum dork value

BlankSystemDaemon
Mar 13, 2009




ExcessBLarg! posted:

It's like VNC/RDP, you login to a desktop and when you disconnect from the session the desktop (and applications) remain running on the server. When you connect to the session again the applications appear again. This isn't suspending either, as the applications will reflect their most recently updated state.

See this. Instead of directly tunneling the X11 protocol, it's basically a chain of nested/proxied X11 instances.

If.

Anyways, my main issue with NX years ago was that the core technology was open source but it was clearly a commercial effort, which meant there was no real effort to integrate it back into the community. If you were OK running the proprietary version, that was fine as long as it worked on your platform, but if it didn't it was a giant pain in the rear end. Not supporting FreeBSD (which FreeNX might actually have supported, I can't recall) is part of that giant pain in the rear end.

You're preaching to the choir here. I personally care about Linux servers because that's all I have to deal with, but I also care that software is generally portable for reasons you mention.
With RDP, in its proprietary implementation by Microsoft, you login to a new session under the same user as the one that might be running on the system already, so far as I know.

The original source for the claim made on Wikipedia doesn't really explain how it's done (which is not unexpected, since it was proprietary?), and it's not exactly obvious from the source code.
Making synchronous operations asynchronous by buffering them seems like a great way to end up in trouble.

FreeNX and NXserver do seem to have existed in FreeBSD Ports for almost a decade, so it's even more of a mystery why X2Go wasn't pointed to from what was the initial upstream.

ExcessBLarg! posted:

What exactly do you mean here? Even if someone adversarial "gains control" of the Linux kernel, they can't retroactively relicense the kernel source code. Due to the sheer number of copyright claimants on the kernel, it will forever be licensed under the terms of the GPLv2 as distributed with the kernel itself.

We've already had this exact event happen when Oracle bought Sun and threw their weight around with MySQL. What happened was predictable: MySQL was forked as MariaDB, and regardless of which of those products you use, the existence of the other keeps them both honest. It's not like the world abandoned MySQL for Postgres or something.

Anyways, most of the time when I write software it's contractually-scoped to be deployed on a specific target platform. I don't intentionally go out of my way to write non-portable code, but if that platform offers specific benefits I'll often make use of them. If the scope changes and I need to make it more portable then I'll shim/compat/replace stuff as needed to achieve that. That said, if you're an open source project trying to achieve maximal adoption then, yes, portability should be a primary concern.
As Pablo Bluth points out, the MySQL users didn't go to Postgres or MariaDB, they went ~elsewhere~, wherever that is.

VictualSquid posted:

That happens pretty regularly to all sorts of free software. Arguably it already happened to the kernel, with the only unusual part being that google didn't manage to get the linux trademark when they made android.
The point I think I missed making is that if that happens, there's no guarantee that Linux will continue to be popular.

If portability isn't a goal, what happens to all the Linux-only software?

RFC2324 posted:

I support gluster at work and it's constant work, and apparently version 7 broke self healing, so
:ohdear:

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

ExcessBLarg! posted:

These days it's automounted by systemd. I assume you can put an entry in /etc/fstab for /dev/shm and control options for it--if you really want--but systemd mounts it even without an /etc/fstab entry since some (essential?) userspace programs are dependent on /dev/shm existing and it would be bad to break those because the configuration doesn't exist.

So that if something accidentally fills /dev/shm it doesn't fill all your RAM.

If you mean, "why does half my RAM go to /dev/shm?" that doesn't actually happen. tmpfs-based file systems allocate pages on demand, so they take up very little memory when not actually used.

Thanks so much for this. I'm assuming that I could have asked systemd to remount it after my mistaken umount -a , rather than trying to remount it via mount -a?

I noticed that Python multiprocessing couldn't work without /dev/shm; all sorts of userspace applications put flags there to communicate.

Are tmpfs filesystems formatted with ext4 / xfs / distro appropriate default filesystems, or are they using something that's better suited for in-memory filesystems somehow?


BlankSystemDaemon
Mar 13, 2009




Twerk from Home posted:

Are tmpfs filesystems formatted with ext4 / xfs / distro appropriate default filesystems, or are they using something that's better suited for in-memory filesystems somehow?
What you're talking about is how MFS is implemented in 4.2BSD (long before Linux existed, which might explain why it made sense at the time), whereas tmpfs is a separate filesystem - on the BSD side of things, it started out in NetBSD before making the rounds to FreeBSD, DragonFlyBSD, and finally OpenBSD.
I think the same is true for tmpfs on Linux, though it derives parts of its code from ramfs, while adding the ability to deal with swapping.

The original paper from the Sun implementation might be a good read for understanding how it works, as that's where all implementations ultimately derive ideas from.

BlankSystemDaemon fucked around with this message at 19:15 on Oct 13, 2021
