Volguus
Mar 3, 2009

RFC2324 posted:

CentOS is pretty much the gold standard for a virtual host, from what I have seen. Run that, then just spin up VMs for whatever you want to do, with the best OS for it.

Why not go with ESXi? Wouldn't it have less overhead?


Volguus
Mar 3, 2009

DOA posted:

I switched to Linux about 6 months ago, and sometimes when I'm going to install something it says something like "Error: This package is uninstallable. Dependency is not satisfiable". So I think, that's OK, I'll just install the missing dependency, but then that dependency needs even more dependencies, and so on, and I end up spending an hour installing something (most stuff is easy to install, but I go crazy sometimes because of this)

That hasn't been a thing for the last 15 years at least. It used to be, back in the days before dnf, yum or apt (or an internet connection), and it was painful, true. Not today.

Volguus
Mar 3, 2009
I think I just threw up a little.

Volguus
Mar 3, 2009
FreeBSD should boot an SMP kernel by default when the installer detects that you have SMP. Apparently it didn't, and neither did booting your own SMP kernel. Does the BIOS show multiple cores? Are they enabled? If you boot a Linux live CD/USB, does that see multiple cores (cat /proc/cpuinfo)?
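From a Linux live environment, the core check mentioned above can be done in a couple of ways (assuming a Linux kernel with /proc mounted, as on any live CD/USB):

```shell
# Count logical CPUs the kernel detected: one 'processor' stanza per CPU
grep -c '^processor' /proc/cpuinfo

# Same answer, shorter (coreutils)
nproc
```

If both report 1 while the BIOS claims multiple cores, the problem is below the OS.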

Volguus
Mar 3, 2009

VostokProgram posted:

Systemd actually works tho

When it does. When it doesn't ... god help you. Though, as a developer, I must say that writing systemd service files is a shitton easier than writing init scripts.
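For comparison, a complete systemd service file can be this short (a hypothetical `myapp` daemon, purely for illustration; the paths and names are made up), versus the hundreds of lines of start/stop/status boilerplate a SysV init script needed:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My hypothetical application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```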
As for filesystems, I was quite happy with XFS on a workstation I had (at work) several years back. Until the inode bug bit me. I have only a vague recollection of the specifics, but suddenly my project would not compile, with the weirdest and most cryptic error messages (can't find this, can't link to that, all things that were there, etc.). It turned out that XFS had 64-bit inodes and gcc didn't (or didn't know what to do with them, or something). Copy the project into another folder ... we're back in business. Wasted 2 days on that, days that I'll never get back.

Volguus
Mar 3, 2009

nem posted:

I found systemd to patch a lot of shortcomings in SysV. It works. it works very well. Wiring in pre/post-conditions that can operate independent of system scripts that may or may not be overwritten by packages, absolutely fantastic. Overriding OOM/nice scores? Beautiful. It's not perfect yet. It requires readaptation and still has some ways to go, but gently caress if it isn't a blessing over the previous pile of manure that persisted for 30+ years in most distros.

Edit: see also, sendmail -> Postfix.

If only Lennart would be ... reasonable (or sane, or ... just loving normal). The way bugs are dealt with in systemd is beyond abysmal. The latest famous ones are those he got a Pwnie award for (https://www.theregister.co.uk/2017/07/28/black_hat_pwnie_awards/). To be so opposed to having a CVE filed? WTF? He's so defensive of his project that I'm not even sure it's healthy.
A lot of systemd criticism is actually directed at the developers and their behaviour. Being shoved down everyone's throat doesn't help either.

Volguus
Mar 3, 2009

evol262 posted:

So systemd is still doing its job? systemd-network isn't the default anywhere I know of. I'd probably do the same
Lennart is not in charge of CVEs. oss-sec and the srt are. He's also correct that a lot of potential vulnerabilities (and there are a lot in a C project running as root and init) are extremely minor or difficult to exploit.

I'd also be opposed to putting "CVE-..." in the git log, because it isn't self-documenting at all. This is reasonable IMO.

I have problems with filing systemd bugs. They almost always get closed until I re-open and explain why it's actually a problem. There are process problems. This is not one of them.

It's not "being shoved down your throat", though? By whom? Your distro maintainers, who decided systemd was a better technical solution than sysvinit or upstart? I encourage everyone who parrots the "Redhat is forcing systemd" to read the Debian steering committee's debate (or SuSE's, or Gentoo's). Open source doesn't work by fiat. Even major projects (mysql, x11, etc) have debates and fork. Sometimes the fork wins. Sometimes it doesn't. It's on merits.

Well, the users feel like it's being "shoved down their throats" by their distributions. Since systemd has grown so large and covers so many (useful, indeed) tasks, programs do become dependent on it (GNOME & co, for example). And then you find yourself unable to easily move to a non-systemd system. It even prompted OpenBSD (at least on their mailing list; no idea if they did it for real) to make a systemd shim API, just to make stupid programs compile and run without too much hassle.

That's not Lennart's fault, of course; it's actually his victory. But a lot of people are not happy. Including IT guys who claim they have lost precious logs because journald hosed up. Sure, you can send them to syslog too, but that's not the default setting.

Overall, systemd filled a need. It is a bit too successful at "filling needs" for one project to handle, though. And the developers working on it sometimes behave like children (which is off-putting too).

Volguus
Mar 3, 2009
Personally, I went with KDE almost a decade ago. Yes, it's on the heavy side, but it has options galore and you can pretty much do anything you want with it. Unless all you have is the power of an RPi, you can run it (even then you can, but you may want something ... simpler).

Volguus
Mar 3, 2009
Fedora combines the best of both worlds: stable, but quite recent versions of packages.
Ubuntu: they took a nice and stable Debian and hosed it up, but through very persistent marketing since 2004 they have managed to win the hearts of Linux newbies everywhere. Every lovely proprietary program out there will have an Ubuntu deb file.
Debian: for when you're fine running programs created before dinosaurs walked the earth. Quite stable, though, and it has tons of packages (if an open source program does not have a Debian package, you didn't want it in the first place).
Arch: flexible, relatively hard to install for a newbie (if reading documentation is hard; not otherwise), but you get the latest and greatest versions of packages. And buggy. Quite buggy.
Gentoo: more stable than Arch, just as flexible, hard to install for newbies, but you compile (almost) everything from source. Meh...
Slackware: hey, it's still alive.
OpenSuSE: good, stable, recent enough packages, very capable system administration tool (YaST), but ... different. In some way it just always felt ... off.

Volguus
Mar 3, 2009

RFC2324 posted:

From what I understand it was insane to implement, because of the differences between Win32 and POSIX.

I could be wrong

The "Line" infrastructure? Definitely. Rumor has it that it was originally a project to run Android apps on Windows Mobile, which was abandoned and then resurrected as Linux on Windows (essentially Line). This particular trick, once the infrastructure was done? Probably not so much.

Volguus
Mar 3, 2009

Paul MaudDib posted:

I use Cygwin with OpenSSH to connect to my servers. Oftentimes I will then run 'screen' on the server to allow a long-running command to complete without occupying the SSH session, or to allow me to disconnect. However, when I am inside a screen session it somehow "captures" my scroll-wheel on the mouse so that I am scrolling back through my bash command history instead of scrolling the terminal output history back. It works normally as long as I am not inside a screen.

Is there a way to force it to scroll the output instead of the command history?

ctrl-a esc puts screen into scrollback (copy) mode, where scrolling moves through the output

then esc again to exit the mode

Volguus
Mar 3, 2009

Paul MaudDib posted:

Is there a way to "hand the MWHEELUP back to the shell" so to speak, to let the client's OS's native textbuffer mode kick in so scrolling happens "naturally" on the client OS scrollbar?

I have never done it, but according to this Stack Overflow post it's possible:

quote:

Add this to your ~/.screenrc.

# Enable mouse scrolling and scroll bar history scrolling
termcapinfo xterm* ti@:te@

Volguus
Mar 3, 2009

Furism posted:

I'm looking for a simple-ish, fast monitoring solution that Just Works ; to run from a Raspberry. Basically I'd like to be able to monitor my CentOS-running NAS, a MySQL server, maybe health check some website to check WAN connectivity, that sort of things. I don't mind installing agents if I have to (if the said agents are not too troublesome to setup on major distros like CentOS or Raspbian), but SNMP OIDs would be fine too I guess. This is just for some home monitoring so nothing critical. I'd like a clean, light interface with graphs. Does this exist?

This is a shot in the dark, since the solution has been available since before the dinosaurs walked the earth (so you probably tried it already), but what is wrong with MRTG for your needs? Worst case scenario ... Nagios? It has a bazillion agents, but I personally never tried it. At home I just use MRTG and it's ... fine. Old, but fine. And adding scripts for it (that do whatever, then spit out 2 numbers) is quite easy.
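The "spits out 2 numbers" part refers to MRTG's external-script convention: the script prints exactly four lines (first value, second value, an uptime string, a target name). A minimal sketch, with memory usage as an arbitrary example metric (the metric choice is mine, not from the thread):

```shell
#!/bin/sh
# Hypothetical MRTG external script: graph memory usage on a Linux box.
# MRTG expects exactly four lines on stdout:
#   line 1: first value  (here: used memory, kB)
#   line 2: second value (here: total memory, kB)
#   line 3: uptime string (informational)
#   line 4: target name   (informational)
TOTAL=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
AVAIL=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
echo $((TOTAL - AVAIL))
echo "$TOTAL"
uptime
hostname
```

Point a `Target[...]: \`/path/to/script\`` line in mrtg.cfg at it and MRTG graphs the two values.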

Volguus
Mar 3, 2009

fletcher posted:

My CentOS 7 VM in VirtualBox suddenly is stuck in 1024x768. I tried upgrading to the latest VirtualBox, did a yum update, and reinstalled guest additions. I still can't figure out what is causing the issue. How can I go about fixing this thing?

Did the VM hardware change? What's the host OS? What does xorg.conf say? Or dmesg? Maybe you're running with the vga driver because somehow a better one cannot be loaded.

Volguus
Mar 3, 2009

fletcher posted:

Nope no VM hardware change. Host OS is Windows 10. I think it may have broken around the same time as a Windows 10 update, not certain about that though.

I couldn't find an xorg.conf, all I see is /etc/X11/xorg.conf.d/00-keyboard.conf

I do see some errors in /var/log/Xorg.0.log:
code:
Failed to load module "vboxvideo" (module does not exist, 0)

Oh, maybe CentOS updated to 7.4 and no VirtualBox drivers are available yet. Nothing you can do for now, unless you want to hunt down development versions of those drivers or compile them yourself. But I am talking out of my rear end, since I am not running CentOS in VirtualBox, and those drivers could be in mainline but broken right now.

Volguus
Mar 3, 2009
VMware Player is fine (performance-wise); the only problem with it is that if you're running a distro that's relatively on the cutting edge (Fedora), the kernel modules can be a pain to fix/compile. KVM is better in every respect. Next week I will get a dual-Xeon workstation that I bought on eBay, and I was pondering between ESXi and KVM; after much deliberation and internet research I came to the conclusion that I have no reason to choose ESXi at all. virt-manager works like a charm, there are even web interfaces to KVM (100+ frontends), and one can play with Docker on the host as well. The only question I still have is whether to go with CentOS (stable but older) or jump into the pit with Fedora.

Volguus
Mar 3, 2009

evol262 posted:

Fedora has been pretty stable since 22 or so

Fedora is awesome; I'm using it as my normal daily desktop. But I wonder if I'd like to update the poo poo out of a server every day. As my desktop, sure ... it's perfect, I wouldn't change it for anything. The stability of CentOS is appealing, though.

Volguus
Mar 3, 2009

Furism posted:

Which one is generally considered "best"? I don't need uber advanced GUI, just create/delete/clone VMs and basically equivalent of the VMWare web interface would be good (feature-wise I mean - don't care about the fluff).

I have no idea. Hopefully I will when I get to play with it. I was looking at Kimchi as easy and light enough for my needs. From what I've read, Proxmox and oVirt are the most popular, but Proxmox wants to be installed as its own distribution (Debian-based), so that's a no-go. A lot of them are targeted at the datacenter, and I am not one. But I use the normal virt-manager anyway, and it can connect to remote hosts via SSH; so far it hasn't given me any trouble for my local VMs (the ones I want to move off my desktop and onto that machine). Therefore, I wonder if a web interface is even needed.

Volguus
Mar 3, 2009

Thermopyle posted:

It's kind of weird to me that there's not more choices of web frontends to KVM aimed at the space of home/small-business servers where you've got just one machine, running a few VMs.

oVirt and Proxmox both seem like they're way overengineered for that use case.

Kimchi seems decent, it's just a little surprising to me there's not more competition in this area.

I always get a little skeeved out by OSS projects whose issues page is as un-maintained as Kimchi's.

Maybe I should take that on as my next side project. (hahah, that means it will go on to my list of side projects that grows much faster than items are completed)

Probably there are, but since that's not where the money is, they get abandoned after the developer loses interest. Giving away free stuff to peasants like me is not gonna pay the bills.

Volguus
Mar 3, 2009
Those who do hate systemd have moved to FreeBSD. Otherwise, unless you cannot move from 6 (or 5) because of some program that you cannot change and that has to run, people have moved to 7. Personally (as a developer, not an admin) I would give a kidney to never have to write old startup scripts again. Systemd made my life a whole lot easier.

Volguus
Mar 3, 2009
And don't forget to add that mountpoint to /etc/fstab so that it will be mounted the next time your computer boots. And tell your Plex installation (why would anyone wanna use that???? but that's another discussion...) to get its movies from there.
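For reference, an /etc/fstab entry is six fields: source, mountpoint, filesystem type, options, dump flag, fsck pass. A sketch (the UUID placeholder and mountpoint are made up; substitute your own, e.g. from `blkid`):

```
# <source>            <mountpoint>  <type>  <options>  <dump>  <pass>
UUID=<your-fs-uuid>   /mnt/media    ext4    defaults   0       2
```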

Volguus
Mar 3, 2009

RFC2324 posted:

is there another thing that will allow me to stream to my TV with an app on my phone, as if it was Netflix? everything else I have seen is more complicated and a pain to teach my wife to deal with (plex was hard enough to get her to learn)

I have no idea. Attaching an Intel NUC to the TV and installing an OS on it works for me. Some people prefer Kodi, as it makes that thing even easier. I don't like Kodi, so I wrote my own thing. Or just use VNC; that's another option. But if you like Plex, then ... have at it. The main feature of Plex that I think people like is the ability to stream to a device even when out of the house. Since that's not something I want or care about ... Plex is not that appealing to me.

Again, I'm only talking about myself here, if you're happy with what you've got, there's no reason to change.

Volguus
Mar 3, 2009

Thermopyle posted:

Your post implied it was a weird and strange thing.

I do believe it is, since they require you to make an account there, the data that you stream gets sent to their servers (so that they can re-send it to your phone), and in general they (the corporation, their servers, not just the application itself) have a creepy insight into everything you do with your installation. While I know I'm not a special snowflake in any way, their application/service does make me uncomfortable.

evol262 posted:

I think most people use Plex to stream to devices they already have -- Fire TVs, Rokus, mobile devices, etc instead of having SFF computers attached to everything like it's 2005

2005 was a good year, what do you have against it? If not wanting to send my movies to them makes me an old fart ... fine. :colbert:

Volguus
Mar 3, 2009
At the end of the day, isn't it simply easier to just open your media player and play the file you want? Sure, maybe they're not spying or anything (unlikely), but from whatever I've read on their website it just seems like a lot of hassle for no benefit whatsoever (the ability to watch a movie on a 5'' screen doesn't count as a benefit). And one still needs the video player software anyway, which on devices like consoles or smart TVs may or may not support the video formats you downloaded your stuff in, which would mean you'd have to re-encode everything, which ... sigh, just why? Open up the player, play it from the NAS or wherever it is, and don't worry about it.

Volguus
Mar 3, 2009
Well, I'm not particularly sure about the convenience part. Why? It does require one to make an account. Now, you have two options (like with any account):
1) Use a password manager, so you do not know the password, and you have to have access to both the database and the application when you are on the go (since you're not using one of those web-based password managers, right?). At which point, you may as well have that drat movie on your USB/phone/tablet anyway.
2) Do like most people do and either use 1234 as the password or just reuse one, like you have done so many times in the past. At which point you just gave access to your media files to anyone who wants it.

And, to be fair, even with option 1) it is very likely that whoever wants access to their database and accounts already has it. Hell, it's not like security is a thing that matters to companies (any/all of them), as the last little while has taught us. To pretend otherwise is just naive. So no, having access to your media from outside your own network is not an option for me, which makes their requiring an account just to play files on your own network a blocking requirement.
With that being said, hell ... if you (people in general) want to use said service, knock yourself out. But getting all up in arms when a stranger on the internet says that it is wrong, weird and strange ... well, it is weird, wrong and strange.

Volguus
Mar 3, 2009

mike12345 posted:

I'm thinking of setting up a new system next year, but I'm a bit skeptical about dual boot Linux - Windows 10. Is there (still) a risk that Windows decides to erase the Linux disk because it thinks it's faulty, or was that FUD?

Well, that's a new one, never heard of it. I've always had a Windows partition on my home computer, and that goes back to ... 1998 or so. Unless I had way too many beers and selected the wrong thing to install an OS on (be it Windows or Linux or whatever), none of them ever gave me any trouble. Sure, in the old days a Windows reinstallation would overwrite the MBR, but that was easy to fix by reinstalling LILO or GRUB.

Volguus
Mar 3, 2009

thebigcow posted:

If I install Fedora without a separate /home will I be kicking myself when it's time to upgrade to the next version?

No, but it can help if, for whatever reason, you want to re-install or install another distro (just choose the same /home). That's the only thing. Of course, you can back up your /home prior to installing/reinstalling things, but that's :effort:

Volguus
Mar 3, 2009

Zero Gravitas posted:

Can anyone recommend a way to remote desktop to a Fedora machine from a Win 10 one? I used to be able to do this with an earlier version of Fedora, but now my new machine just gives TigerVNC a black screen, which is apparently a known issue that doesn't look like it's going to get fixed anytime soon.

EDIT: This isn't urgent. Realised I asked this in this very thread seven months ago. I'll give that old stuff another go tomorrow night, when it's not 1am.

I always installed Cygwin/X on a Windows machine and it always worked perfectly. Normally I would use ssh -X (or ssh -Y) to log in to the remote machine, then start whatever application I needed.

Volguus
Mar 3, 2009

RFC2324 posted:

If you are going to do X forwarding, WSL with an X server (Xming is the goto) is much easier to deal with than cygwin.

I guess to each his own.

Edit: Is this the Xming that hasn't been updated since 2007? In 2008 I tried to use it and it wouldn't work on whatever Windows version I had at the time. Clicking next in Cygwin never seemed hard, but hey ...

Volguus fucked around with this message at 02:29 on Oct 27, 2017

Volguus
Mar 3, 2009

Thermopyle posted:

This is me in all respects. In fact, I just set up xming again this afternoon and it took about 2 minutes. Install it, turn on x11 forwarding in putty, done.

I setup cygwin (click next), run xinit, ssh -X host. Done.

Volguus
Mar 3, 2009

RFC2324 posted:

It's been years since I bothered with cygwin, but just xinit never worked right for me without having to gently caress with configs for a couple hours, and even then performance was crap. Last time I touched it was 2005(?), so maybe the stock install is less complete poo poo now.

Haha. Here I come with Xming experience from 2008 (shitshow, didn't work, maybe the paid version is better) and you with Cygwin experience from 2005. Both cases are from roughly a decade or more ago, so I would think it's safe to assume they don't match anymore (except Xming, since the free version is still from 2007).

When I'm on Windows and need that X server, I always install Cygwin. Last time I did that was 3 months ago, on a Windows 10 machine. Like always, it worked without a hitch. Should it stop working ... yes, I would definitely try something else, especially now that Windows 10 has that Linux-on-Windows system. But until then ... meh, it does what I need it to do.

Tigren posted:

I have no idea what Sway is, but Fedora has been shipping Wayland as the standard for about a year now and I believe Ubuntu made it the default in their own newest 17.10 release. It's definitely ready for production.

Sway is a window manager for Wayland, similar to i3 (i3 is for X11). The only video cards that have trouble with Wayland (as far as I know) are NVIDIA ones with the proprietary driver. Nouveau maybe works (I have a 970, so Nouveau doesn't work on that), but the NVIDIA proprietary driver doesn't.
Just the other day I tried GNOME-on-Wayland in F27. While, much to my surprise, it started and showed me the desktop, it was unusable, since gnome-shell was eating 1500% CPU. Weston by itself didn't start. Now, it could be F27-beta bugs, but ... yeah, I will try it again maybe next year.

Volguus
Mar 3, 2009

Thermopyle posted:

Is there something free that compares to lansweeper that I can run on my linux home server?

To save you a click, lansweeper is basically a network scanner with fingerprinting and all that jazz that serves up a nice web ui.

Of course, I could just shove lansweeper in a VM and run it there, but it'd be nicer to have something native...

You want the web UI or the scanner? Because the scanner is nmap. Console only (no idea if there's a UI for it). Lansweeper is most likely using nmap itself (just a shot in the dark).

Volguus
Mar 3, 2009
I would really be wary of those browser plugins that fill in passwords from online password managers. They have had bugs in the past where they could be tricked into filling in the password for a different site, and surely they aren't 100% safe now, nor can they ever be (I don't remember if it was 1Password or LastPass, but it doesn't really matter).

Volguus
Mar 3, 2009
Speaking of "KeeWhateverPass", which one is the latest one? Looking at my KDE menu I have 3 Kee programs showing up: KeePass, KeePassX 2 and KeePassXC. UI-wise they're ... fine, whatever, nothing to see here. But under the hood, is there a difference? Is one more competently updated than the other one?

Volguus
Mar 3, 2009

Mr Shiny Pants posted:

Yeah, it wasn't loaded. It seems like my kernel got an upgrade to .17 and now the drivers won't load. Something about an unknown symbol. I tried a git clone again but the build fails:

code:
/linux/include/linux/compiler-gcc.h:3:2: error: #error "Please don't include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead."
 #error "Please don't include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead."


The easiest thing is to keep booting the old kernel. But if you have the source of the driver, you can fix it, if you want to and know how.

Volguus
Mar 3, 2009
You are probably fine if you just zero out the first 512 bytes, since that's where the partition table is stored. To be on the safe side, wipe a few kilobytes. Otherwise you're gonna wait forever for dd to zero out a 30GB SD card.
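A sketch of that, demonstrated on a throwaway file rather than a real device (on the actual card you would point `of=` at the device node, e.g. /dev/sdX, very carefully):

```shell
# Create a 4 MB scratch file to stand in for the SD card, and put some
# fake data at the start where the MBR/partition table would live:
dd if=/dev/zero of=fake-sdcard.img bs=1M count=4 2>/dev/null
printf 'BOOTSECTOR' | dd of=fake-sdcard.img conv=notrunc 2>/dev/null

# Zero the first few kilobytes (512 bytes holds the MBR + partition
# table; 8 sectors = 4 KiB to be on the safe side). conv=notrunc keeps
# the rest of the file/device untouched:
dd if=/dev/zero of=fake-sdcard.img bs=512 count=8 conv=notrunc 2>/dev/null
```

This finishes instantly, instead of the full-device wipe that takes forever.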

Volguus
Mar 3, 2009
It can also be that the blocks were bad from before, but only when you created the recovery system, wrote something to the new partition, etc., did it actually poo poo the bed. They are better than floppies, but that isn't saying much.

Volguus
Mar 3, 2009

fletcher posted:

Bumping this one...any ideas?

I am not familiar with Logwatch, but surely the two can't have the same config files if one works and one doesn't. root@ubuntu is just the default email address of a user on a *NIX system: user@host.
So one instance picks up the email address of the user it runs under, while the other doesn't. I'd re-check the conf files.
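If it is the destination address that differs, the setting to compare between the two machines would be something like this (assuming a stock Logwatch layout; the address is a placeholder and paths vary by distro):

```
# /etc/logwatch/conf/logwatch.conf -- overrides the shipped defaults
# (often /usr/share/logwatch/default.conf/logwatch.conf)
MailTo = admin@example.com
MailFrom = Logwatch
```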

Volguus
Mar 3, 2009
Well, this is one kind of Slashdot post that I haven't seen in at least 15 years, if not more: Could 2018 Be The Year of the Linux Desktop?


Volguus
Mar 3, 2009

Alpha Mayo posted:

This is what Linux does to me. That, and because I want to use UEFI for everything instead of legacy, because it will shave like 3 loving seconds off the boot time, even though my mobo has a halfassed UEFI implementation that wants to fight me all the way.

In theory, UEFI is like the second coming and is perfectly deserving of the praise. In practice, if a MB manufacturer hosed up the old BIOS, it can and will gently caress up the UEFI too. Yes, there's one unified interface, and 100 billion implementations in which the same function call will do 100 billion different things, one of which is what you want. But at least we got a pretty mouse-driven motherboard UI.
