Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Molten Boron posted:

I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem?
Did you rm /var/lib/rpm/db* before running rpm --rebuilddb?


Longinus00
Dec 29, 2005
Ur-Quan

Shaocaholica posted:

Which kernel versions have USB attached SCSI? I can't seem to find clear data on that.

Git shows UAS was added in 115bb1ff (~2.6.37). From your quote I doubt the code gets much use, considering how long it has been marked broken, and it looks like it's still marked broken in 3.13.
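If you want to double-check yourself in a kernel tree, something like this should do it (the commit id is the one above; the Kconfig path is per the usual mainline layout):

code:
git describe --contains 115bb1ff                              # first tag that contains the UAS commit
grep -n -B1 -A5 'config USB_UAS' drivers/usb/storage/Kconfig  # look for a 'depends on BROKEN' line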

Molten Boron
Nov 1, 2010

Fucking boars, hunting whores.

Misogynist posted:

Did you rm /var/lib/rpm/db* before running rpm --rebuilddb?

Yes I did.

xdice posted:

I'm assuming you've tried "rpm --rebuilddb" - can you paste in the error(s) you're seeing?

I can't access the server from home so I can't give you the full list of errors right now, but they start something like this (taken from here):

rpmdb: /var/lib/rpm/Packages: unexpected file type or format
error: cannot open Packages index using db3 - Invalid argument (22)


I would've tried rebuilding from /var/log/rpmpkgs, but Red Hat moved that out of the base rpm package for RHEL 6, and I didn't realize it was missing until it was too late. :doh:

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Molten Boron posted:

Yes I did.


I can't access the server from home so I can't give you the full list of errors right now, but they start something like this (taken from here):

rpmdb: /var/lib/rpm/Packages: unexpected file type or format
error: cannot open Packages index using db3 - Invalid argument (22)


I would've tried rebuilding from /var/log/rpmpkgs, but Red Hat moved that out of the base rpm package for RHEL 6, and I didn't realize it was missing until it was too late. :doh:
Dumb question, but are you out of disk space? I know I've seen this before and it was some completely asinine condition, but I can't remember what it was.

Can you also post the output of ldd /usr/bin/rpm?

evol262
Nov 30, 2010
#!/usr/bin/perl

Molten Boron posted:

I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem?
An in-place upgrade will not resolve this, since rpm doesn't have a real way to scan the system and reassemble the database from comps (you can script it yourself, but it's very nasty).

Misogynist posted:

Did you rm /var/lib/rpm/db* before running rpm --rebuilddb?

`file /var/lib/rpm/Packages` ?
rpm -Vvv rpm (you'll get an error, but the output at the beginning is what matters here)

yum clean all
rm /var/lib/rpm/__db.0*
rpm --rebuilddb

There are nasty ways to rebuild with db_dump and db_load, but they're not guaranteed, not parsable without writing C, and not worth the effort when you can reinstall.
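If you do end up going that route anyway, the rough shape of it is something like this (untested sketch; the Berkeley DB utilities come from db4-utils on RHEL 6 if I remember right, and copy /var/lib/rpm somewhere safe first):

code:
cp -a /var/lib/rpm /var/lib/rpm.bak                # keep the broken DB around
cd /var/lib/rpm
rm -f __db.00*                                     # stale shared environment files
db_dump Packages > /tmp/Packages.dump              # may still fail if Packages is truly mangled
db_load -f /tmp/Packages.dump Packages.new
mv Packages Packages.broken && mv Packages.new Packages
rpm --rebuilddb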

You can also rpm --initdb, then rpm -i -v --noscripts --nodeps --notriggers --excludepath / ...
But don't. rpm --rebuilddb will almost certainly fix it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
EDIT: I fixed this, solution at the bottom

evol262 posted:

I totally missed this earlier, but it doesn't look like xl2tpd is actually picking it up. Have you tried xl2tpd -D?

What client? nm-applet? racoon?
So this is magically working all of a sudden, but now I'm having a weird problem with a site-to-site tunnel between two EC2 regions: I can't ping from one side to the other anymore, and I can't figure out why. The systems in each region function as a combined VPC NAT gateway and VPN server.

My ipsec.conf looks more or less like this on both sides:

code:
# /etc/ipsec.conf - Openswan IPsec configuration file

# This file:  /usr/share/doc/openswan/ipsec.conf-sample
#
# Manual:     ipsec.conf.5


version 2.0        # conforms to second version of ipsec.conf specification

# basic configuration
config setup
        protostack=netkey
        dumpdir=/var/run/pluto/
        nat_traversal=yes
        #virtual_private=%v4:10.2.0.0/16
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:!10.1.0.0/16
        oe=off

conn nat-east
        type=tunnel
        authby=secret
        left=10.1.12.244
        leftsubnet=10.1.0.0/16
        leftid=<left-eip>
        right=<right-eip>
        rightsubnet=10.2.0.0/16
        pfs=yes
        auto=start

conn L2TP-PSK-NAT
        rightsubnet=vhost:%priv
        also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
        type=transport
        authby=secret
        pfs=no
        keyingtries=3
        rekey=no
        dpddelay=10
        dpdtimeout=90
        dpdaction=clear
        ikelifetime=8h
        keylife=1h
        left=10.1.12.244
        leftid=<left-eip>
        leftprotoport=17/1701
        right=%any
        rightprotoport=17/%any
        auto=add

conn passthrough-for-non-l2tp
        type=passthrough
        left=10.1.12.244
        leftnexthop=%defaultroute
        right=0.0.0.0
        rightsubnet=0.0.0.0/0
        auto=route
The output of ipsec look:

code:
root@nat-west-1:/etc/ppp# ipsec look
nat-west-1 Thu Mar  6 05:31:21 UTC 2014
XFRM state:
src 10.1.12.244 dst <right-eip>
        proto esp spi 0x3c8569e7 reqid 16385 mode tunnel
        replay-window 32 flag af-unspec
        auth-trunc hmac(sha1) 0xd8465c3a2c2d24ba90f1455af9d2b3c549ae71f2 96
        enc cbc(aes) 0x46fba17e8aef5bbca19bcb247bf03f36
        encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
src <right-eip> dst 10.1.12.244
        proto esp spi 0x600304c8 reqid 16385 mode tunnel
        replay-window 32 flag af-unspec
        auth-trunc hmac(sha1) 0xd4f0cf6f1fe428b2a4aa7538bb87be2398eae809 96
        enc cbc(aes) 0xe7659233c0a2c81301611f1b96a1392b
        encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
XFRM policy:
src 10.1.0.0/16 dst 10.2.0.0/16
        dir out priority 2608
        tmpl src 10.1.12.244 dst <right-eip>
                proto esp reqid 16385 mode tunnel
src 10.2.0.0/16 dst 10.1.0.0/16
        dir fwd priority 2608
        tmpl src <right-eip> dst 10.1.12.244
                proto esp reqid 16385 mode tunnel
src 10.2.0.0/16 dst 10.1.0.0/16
        dir in priority 2608
        tmpl src <right-eip> dst 10.1.12.244
                proto esp reqid 16385 mode tunnel
src 10.1.12.244/32 dst 0.0.0.0/0
        dir fwd priority 3104
src 10.1.12.244/32 dst 0.0.0.0/0
        dir in priority 3104
src 10.1.12.244/32 dst 0.0.0.0/0
        dir out priority 2112
src ::/0 dst ::/0
        socket out priority 0
src ::/0 dst ::/0
        socket in priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket out priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket in priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket out priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket in priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket out priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket in priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket out priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket in priority 0
XFRM done
IPSEC mangle TABLES
iptables: No chain/target/match by that name.
ip6tables: No chain/target/match by that name.
NEW_IPSEC_CONN mangle TABLES
iptables: No chain/target/match by that name.
ip6tables: No chain/target/match by that name.
ROUTING TABLES
default via 10.1.0.1 dev eth0  metric 100
10.1.0.0/20 dev eth0  proto kernel  scope link  src 10.1.12.244
fe80::/64 dev eth0  proto kernel  metric 256
iptables -L:
code:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
iptables -L -t nat:

code:
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
nat_excludes  all  --  anywhere             anywhere
nat        all  --  anywhere             anywhere

Chain nat (1 references)
target     prot opt source               destination
MASQUERADE  all  --  anywhere             anywhere

Chain nat_excludes (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             ip-10-0-0-0.us-west-1.compute.internal/16
ACCEPT     all  --  anywhere             ip-10-1-0-0.us-west-1.compute.internal/16
ACCEPT     all  --  anywhere             ip-10-2-0-0.us-west-1.compute.internal/16
The policies look perfect, but I'm not getting packets from one end to the other. Any idea what might be up?

EDIT: It works! The problem was with the passthrough-for-non-l2tp that I added to deal with the L2TP clients coming in. The issue is that it was set to route the traffic for 0.0.0.0/0, which caught the traffic originating from 10.0.0.0/8 that I wanted to pass through to the other side of the site-to-site VPN. So, I busted out a subnet calculator and put a hole for 10.0.0.0/8 in the middle of the rightsubnet and came up with:

code:
rightsubnets={0.0.0.0/5 8.0.0.0/7 11.0.0.0/8 12.0.0.0/6 16.0.0.0/4 32.0.0.0/3 64.0.0.0/2 128.0.0.0/1}
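For anyone curious, the arithmetic works out like this (each prefix covers a chunk of the address space on either side of the 10.0.0.0/8 hole):

code:
# 0.0.0.0/5      0.x  -   7.x
# 8.0.0.0/7      8.x  -   9.x
#                (10.x skipped; that's what rides the site-to-site tunnel)
# 11.0.0.0/8    11.x
# 12.0.0.0/6    12.x  -  15.x
# 16.0.0.0/4    16.x  -  31.x
# 32.0.0.0/3    32.x  -  63.x
# 64.0.0.0/2    64.x  - 127.x
# 128.0.0.0/1  128.x  - 255.x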
Now my sites can ping each other, my clients work as intended, and my clients can ping through the VPN to the other VPN on the other end. It's slow as poo poo, but that's a problem for another day.

I hope none of you ever have to deal with IPsec. :)

Vulture Culture fucked around with this message at 08:16 on Mar 6, 2014

spankmeister
Jun 15, 2008






spankmeister posted:

Nope, I had 0.8.0-3 and the latest is -4, one of the bugs fixed is:
1045168 - failure to boot upgrade environment if /var is not on rootfs

Aaand I have a separate var. :downs:

Trying the upgrade later today. :)

And it worked, the fedup update from testing did the trick. :)

calandryll
Apr 25, 2003

Ask me where I do my best drinking!



Pillbug
I'm currently running elementary OS on my school/work laptop, but after a few annoying freezes I'm looking to install something else. Since it is a laptop, should I bother with encrypting my home directory?

z06ck
Dec 22, 2010

calandryll posted:

I'm currently running elementary OS on my school/work laptop, but after a few annoying freezes I'm looking to install something else. Since it is a laptop, should I bother with encrypting my home directory?

I did crypto on an older laptop (core 2 duo/3GB) running Mint 16 and honestly noticed no slowdowns, so go for it if you want.
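If you do want it on an Ubuntu/Mint-family install, the installer can set it up for you, or you can convert an existing home later with something like this (rough sketch; needs ecryptfs-utils, should be run from a separate admin account while the target user is logged out, and back up first):

code:
sudo apt-get install ecryptfs-utils
sudo ecryptfs-migrate-home -u youruser    # 'youruser' is whatever your login name is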

Stealthgerbil
Dec 16, 2004


Is there any special way I should partition a system if I am mainly going to be using it for running virtual machines?

evol262
Nov 30, 2010
#!/usr/bin/perl

Stealthgerbil posted:

Is there any special way I should partition a system if I am mainly going to be using it for running virtual machines?

With KVM?

Make /var/lib/libvirt/images separate, preferably NFS, iSCSI, gluster, or ceph. Maybe OCFS2/GFS2 if you want a clustered filesystem on a SAN LUN.
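If it ends up being plain local disk instead, a rough sketch of what I mean (VG name and size made up, adjust to taste):

code:
lvcreate -L 200G -n vmimages vg_data
mkfs.xfs /dev/vg_data/vmimages
echo '/dev/vg_data/vmimages /var/lib/libvirt/images xfs defaults 0 2' >> /etc/fstab
mount /var/lib/libvirt/images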

kyuss
Nov 6, 2004

Quick question: is there anything like a boot manager that allows access to its menu over the network?
I'm dreaming of being able to log in to my Windows box, reboot it, and remotely tell the boot manager to boot into Linux / VMware ESXi, and vice versa.


vvvv will look into the serial2usb method, thanks!

kyuss fucked around with this message at 20:07 on Mar 8, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

kyuss posted:

Quick question: is there anything like a boot manager that allows access to its menu over the network?
I'm dreaming of being able to log in to my Windows box, reboot it, and remotely tell the boot manager to boot into Linux / VMware ESXi, and vice versa.

You can use cobbler or foreman if you don't mind reimaging.

But what you probably actually want is IPMI, or dumping output from GRUB to a Raspberry Pi over serial.
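The GRUB end of the serial trick is only a couple of lines in /etc/default/grub on a GRUB 2 box (rough sketch; regenerate the config afterwards):

code:
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
# then: grub2-mkconfig -o /boot/grub2/grub.cfg   (or update-grub on Debian/Ubuntu)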

loose-fish
Apr 1, 2005

Xik posted:

I appreciate the input but those have everything pre-configured and pre-installed, including their own desktop environments. If I wanted that, I probably wouldn't be using Arch. Debian or Fedora, or something more stable and established would be my preferred option.

It's ultimately less effort to install something like Arch and configure it than it is to wrestle an existing pre-configured distro into how I want it. Except maybe a minimal Debian base install, that has the best of both worlds. I'm sure there is a reason I'm not just using that :v:.

I was in a similar situation and stumbled upon this, which is pretty much an up-to-date version of the old Arch installer. I have no idea why it's not on the standard ISOs anymore, it worked great for me.

Are there any more options if you want a pretty minimal install and a rolling release model?

I gave Debian testing a try, but I prefer pacman to apt-get/aptitude and I don't really want to manage repositories for the three packages I can't get otherwise (yes, the AUR can be extremely spotty but it's usually fine when it's just pulling stuff from git and packaging it ...).

mod sassinator
Dec 13, 2006
I came here to Kick Ass and Chew Bubblegum,
and I'm All out of Ass
I have a main machine running Ubuntu 12.04 with a monitor resolution of 1920x1200. When I connect to this machine's VNC server (the built in Ubuntu vino server) on my laptop with 1440x900 resolution it looks terrible and is impossible to use because of the resolution difference. How can I set things up so when I VNC in I either get a new session at the same resolution as my laptop, or temporarily resize the session that's being connected to? I've searched around and there are a lot of conflicting and confusing ways to do it by hacking xorg configs and whatnot--is there any easy way to do this? I'm not married to VNC either, I just want a way to connect to my Ubuntu machine remotely that works with different resolutions without scaling.

mod sassinator fucked around with this message at 19:44 on Mar 8, 2014

Xik
Mar 10, 2011

Dinosaur Gum

loose-fish posted:

Are there any more options if you want a pretty minimal install and a rolling release model?

I gave Debian testing a try, but I prefer pacman to apt-get/aptitude and I don't really want to manage repositories for the three packages I can't get otherwise (yes, the AUR can be extremely spotty but it's usually fine when it's just pulling stuff from git and packaging it ...).

If rolling release, pacman and AUR are requirements you're not leaving much room for alternatives :).

Minimal: Almost every major distro can pull this off (even Ubuntu last I checked). I've recently learned they pretty much all have curses-based installers that leave you in the same place.

Rolling Release: I think you're pretty much boned if you are looking for a full rolling distro, unless you want to try Gentoo I guess? I haven't actually done a standard distro upgrade in a long time. The server that I run Debian on has hardware failures more often than Debian is updated, so I just install the latest version when that happens. I can't imagine modern distro upgrades are any more of a hassle than when Arch decides to implement some major system-breaking change though.

Pacman: Have you played with Yum before? I recently had a play, it's quite nice. The output is probably the best out of all the package managers in my opinion. I don't think you'll find another distro which isn't based on Arch that has adopted Pacman.

AUR: I have no idea, if you find something similar let me know. I actually like the AUR, but I manually check the pkgfiles, URLs and any other scripts before I build any package. I think using the AUR behind a non-interactive script or package manager like it's just another official repository is really dumb.

mod sassinator posted:

I'm not married to VNC either, I just want a way to connect to my Ubuntu machine remotely that works with different resolutions without scaling.

This is probably a dumb question, but does what you do over VNC actually require a GUI? Perhaps whatever you are doing has a CLI interface that you can control via ssh?

mod sassinator
Dec 13, 2006
I came here to Kick Ass and Chew Bubblegum,
and I'm All out of Ass
Yeah I try to stick to the CLI, but sometimes I need to connect to the machine and use Eclipse/IntelliJ or other development stuff.

I was secretly hoping I could start using a cheap Chromebook as a nice $200 web browser and occasional "connect to another machine and get stuff done" device.

hifi
Jul 25, 2012

Xik posted:

If rolling release, pacman and AUR are requirements you're not leaving much room for alternatives :).

Minimal: Almost every major distro can pull this off (even Ubuntu last I checked). I've recently learned they pretty much all have curses-based installers that leave you in the same place.

Rolling Release: I think you're pretty much boned if you are looking for a full rolling distro, unless you want to try Gentoo I guess? I haven't actually done a standard distro upgrade in a long time. The server that I run Debian on has hardware failures more often than Debian is updated, so I just install the latest version when that happens. I can't imagine modern distro upgrades are any more of a hassle than when Arch decides to implement some major system-breaking change though.

Pacman: Have you played with Yum before? I recently had a play, it's quite nice. The output is probably the best out of all the package managers in my opinion. I don't think you'll find another distro which isn't based on Arch that has adopted Pacman.

AUR: I have no idea, if you find something similar let me know. I actually like the AUR, but I manually check the pkgfiles, URLs and any other scripts before I build any package. I think using the AUR behind a non-interactive script or package manager like it's just another official repository is really dumb.


This is probably a dumb question, but does what you do over VNC actually require a GUI? Perhaps whatever you are doing has a CLI interface that you can control via ssh?

What does a requirement of AUR entail? There's a whole bunch of rpmfusion type sites that have stuff not included in the RHEL/Fedora repositories.

Fedora's "rolling release" version is Rawhide, but like you said it's probably a bad idea to use one when large sweeping changes hit the distribution fairly regularly with little user benefit.

mod sassinator posted:

Yeah I try to stick to the CLI, but sometimes I need to connect to the machine and use Eclipse/IntelliJ or other development stuff.

I was secretly hoping I could start using a cheap Chromebook as a nice $200 web browser and occasional "connect to another machine and get stuff done" device.

I use emacs to edit files remotely, and it looks like eclipse can do the same thing. Maybe you could set up samba/nfs on your computer also?
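If you go the emacs route, TRAMP makes remote files transparent; a path like this just works over ssh (host and file are obviously made up):

code:
emacs /ssh:you@yourdesktop:/home/you/project/src/main.c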

Xik
Mar 10, 2011

Dinosaur Gum

hifi posted:

What does a requirement of AUR entail? There's a whole bunch of rpmfusion type sites that have stuff not included in the RHEL/Fedora repositories.

I don't think those are the equivalent of AUR. Those are just 3rd party repositories that supply binary packages that aren't in the official repository right?

Some people use AUR in a way that would make those seem like an alternative. But really, PKGBUILD files are just a recipe for building your own packages from upstream sources.

The equivalent would probably be building your own rpm or deb packages from source and then installing that package with the standard package manager.

e: I quickly looked up what's involved in building an rpm package, I think the direct equivalent of AUR would probably be a repository of spec files.
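For anyone who hasn't seen one, a PKGBUILD really is just a small shell recipe. Something like this (name, URL, and checksum are invented for illustration):

code:
pkgname=example-tool
pkgver=1.0
pkgrel=1
pkgdesc="illustrative example package"
arch=('any')
url="https://github.com/example/example-tool"
license=('MIT')
source=("$url/archive/v$pkgver.tar.gz")
sha256sums=('SKIP')

package() {
  cd "example-tool-$pkgver"
  make DESTDIR="$pkgdir" PREFIX=/usr install
}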

Xik fucked around with this message at 05:40 on Mar 9, 2014

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
They'd be "source rpms" or "srpms". Which you can get from any yum repo.

evol262
Nov 30, 2010
#!/usr/bin/perl

Xik posted:

I don't think those are the equivalent of AUR. Those are just 3rd party repositories that supply binary packages that aren't in the official repository right?

Some people use AUR in a way that would make those seem like an alternative. But really, PKGBUILD files are just a recipe for building your own packages from upstream sources.

The equivalent would probably be building your own rpm or deb packages from source and then installing that package with the standard package manager.

e: I quickly looked up what's involved in building an rpm package, I think the direct equivalent of AUR would probably be a repository of spec files.

ebuilds from Gentoo are a close equivalent. And yes, specfiles. But it's a worthless distinction, really. A better question would be what you think is so great about the AUR and why you can't live without it?

PPAs are broadly equivalent, and both PPAs and the AUR came into existence because packages people wanted weren't in the official repos, often because the author only provided RPMs. At this point, it's more appropriate for Fedora than Ubuntu, but eh.

Why are PPAs a requirement when you can get GPG-signed packages with available srpms (which include specfiles) that you can read if you want to pretend you're diligent and you can really tell (hint: you can't. GPG is much safer than reading pkgbuilds or specfiles; I'd give 100:1 odds that I could set up an official-looking Google Code or github site for a project with a name so close to correct that you'd trust it, and get a pkgbuild which includes vulnerabilities accepted in a week)?

Why is it required?

Xik
Mar 10, 2011

Dinosaur Gum

evol262 posted:

ebuilds from Gentoo are a close equivalent. And yes, specfiles. But it's a worthless distinction, really. A better question would be what you think is so great about the AUR and why you can't live without it?

I could live without it. It's honestly not a massive deal breaker for me, but it was for the person I was replying to.

If I moved to another distro in the future for my main desktop (I'll probably switch at the next hardware or massive system failure) then it would probably be easier for me to just find alternative applications for the things I can't get from official repos. Hell, most of the things probably don't even need to be installed via the system package manager (like emacs extensions).

evol262 posted:

Why are PPAs a requirement when you can get GPG-signed packages with available srpms (which include specfiles) that you can read if you want to pretend you're diligent and you can really tell (hint: you can't. GPG is much safer than reading pkgbuilds or specfiles; I'd give 100:1 odds that I could set up an official-looking Google Code or github site for a project with a name so close to correct that you'd trust it, and get a pkgbuild which includes vulnerabilities accepted in a week)?

Why is it required?

I don't really understand your argument, I'm obviously missing something. Where are these GPG-signed packages coming from? The whole point of this exercise is to install software that is not in the official repos. If upstream supplies GPG signed packages, that's perfect, problem solved (you already trust them by running their software).

If they are signed by some random person operating a third-party repository, what difference does it make if they are even signed? The whole point is to not trust them.

It's honestly not that difficult to be "diligent" when checking PKGBUILD files, although I'm sure 90% of users don't even bother checking them. Basically all the packages I have from AUR have a source on github. These are repos I already have "starred" and I know are the "official" repo. Things like magit, for instance. The URLs in terminal emacs are clickable, so I click on them, see it's the "real one", check what else the PKGBUILD script does, and then when satisfied, build the package.

SurgicalOntologist
Jun 17, 2004

mod sassinator posted:

Yeah I try to stick to the CLI, but sometimes I need to connect to the machine and use Eclipse/IntelliJ or other development stuff.

I was secretly hoping I could start using a cheap Chromebook as a nice $200 web browser and occasional "connect to another machine and get stuff done" device.

I use a chromebook for this purpose and I love it. I'm serving iPython Notebook and RStudio Server from my desktop, SSH and command-line crouton in a tab is great. I have Unity on the chroot but the only things I load it for are Zotero+PDFs and PyCharm. I set up a remote interpreter for PyCharm (which you should be able to do as well) but I've barely used it, the Chromebook's powerful enough to run all my tests at least. Highly recommended for Linux people as a tool to connect to servers/remote machines, plus it's surprisingly powerful on its own.

mod sassinator
Dec 13, 2006
I came here to Kick Ass and Chew Bubblegum,
and I'm All out of Ass

SurgicalOntologist posted:

I use a chromebook for this purpose and I love it. I'm serving iPython Notebook and RStudio Server from my desktop, SSH and command-line crouton in a tab is great. I have Unity on the chroot but the only things I load it for are Zotero+PDFs and PyCharm. I set up a remote interpreter for PyCharm (which you should be able to do as well) but I've barely used it, the Chromebook's powerful enough to run all my tests at least. Highly recommended for Linux people as a tool to connect to servers/remote machines, plus it's surprisingly powerful on its own.

Nice! How much ram does your chromebook have, and is crouton/chroot reasonably fast for IDEs/editors? Looking at the Acer C720 but you can really only get the 2GB ram version now and I'm a little worried it will be slow. Can't beat $200 for a real 9-hour-battery-life tiny laptop though.

nescience
Jan 24, 2011

h'okay
Am I the only person that's having a terrible experience running RedHat-based distros as a guest OS? I have terrible performance issues with Fedora, while CentOS and ScientificLinux images aren't even detected as installation media. I've tried VirtualBox and VMWare Workstation. I've tried a lot of Debian derivatives that just work out of the box... what's going on here?

evol262
Nov 30, 2010
#!/usr/bin/perl

Xik posted:

I don't really understand your argument, I'm obviously missing something. Where are these GPG-signed packages coming from? The whole point of this exercise is to install software that is not in the official repos. If upstream supplies GPG signed packages, that's perfect, problem solved (you already trust them by running their software).

If they are signed by some random person operating a third-party repository, what difference does it make if they are even signed? The whole point is to not trust them.

It's honestly not that difficult to be "diligent" when checking PKGBUILD files, although I'm sure 90% of users don't even bother checking them. Basically all the packages I have from AUR have a source on github. These are repos I already have "starred" and I know are the "official" repo. Things like magit, for instance. The URLs in terminal emacs are clickable, so I click on them, see it's the "real one", check what else the PKGBUILD script does, and then when satisfied, build the package.
What I mean is that this argument is absurd, essentially. The point of signing rpmfusion or epel is that you trust the process of the repo, and aur (as a 3rd party analogue) has no equivalent.

It's excellent for you that every AUR package you have installed is one in which you're familiar with upstream and every dependency, with their associated URLs, but it's not reasonable security practice, and clicking on a github link gives you no assurances whatsoever unless you're already familiar, since it would be easy to fake. That also discounts reading the pkgbuild every time it changes, and making sure there are corresponding release notes.

The point of trusting a 3rd party repo is precisely the same as that of trusting thawte. EPEL polices their packages, and they're signed and countersigned. If you don't trust them, don't use it, but github is at best as trustworthy and probably less so. This is the same argument by which people refused to trust the PA-RISC porting center. Obviously $somepackage self-signing is pretty useless. RPMfusion signing $somepackage is not.

In short, the AUR is no better than downloading tarballs and ./configure && make && make install. Except it's wget-ing them for you, and 99% of consumers don't look any closer, with no mechanism of trust. It's convenient and it has a ton of packages, but using it "securely" is time consuming and questionably reliable.

Basically, you should look at what it takes to get a package into the AUR (basically nothing, submit a tarball) versus RPMfusion (sponsorship ->review -> acceptance -> ssh keys -> builds with user certs which expire -> signed -> released) and ask yourself whether clicking a URL is really more trustworthy. I'm not trying to knock the AUR, but it's easier to get into the Arch [community] distro than rpmfusion.free, and the AUR is a cakewalk. It's a really poo poo comparison.

Xik
Mar 10, 2011

Dinosaur Gum

evol262 posted:

What I mean is that this argument is absurd, essentially. The point of signing rpmfusion or epel is that you trust the process of the repo, and aur (as a 3rd party analogue) has no equivalent.

Yes, you have to trust the repository. That's my point. For debian, the alternative to things being in the official repos is adding a whole bunch of third party repositories to apt. How is them being signed helpful at all? It's like the equivalent of filling out the "publisher" field in Windows installers, worthless if you don't trust the source you download it from.

evol262 posted:

It's excellent for you that every AUR package you have installed is one in which you're familiar with upstream and every dependency, with their associated URLs, but it's not reasonable security practice, and clicking on a github link gives you no assurances whatsoever unless you're already familiar, since it would be easy to fake. That also discounts reading the pkgbuild every time it changes, and making sure there are corresponding release notes.

I am familiar with every AUR package on my system. I've never needed to check the whole dependency tree because I've never had a dependency that isn't available in the official repository. I know this isn't common, which is why I said it's really dumb that folks use it just like another repository and use a third party tool to make it seamlessly integrate into pacman.

evol262 posted:

Basically, you should look at what it takes to get a package into the AUR (basically nothing, submit a tarball) versus RPMfusion (sponsorship ->review -> acceptance -> ssh keys -> builds with user certs which expire -> signed -> released) and ask yourself whether clicking a URL is really more trustworthy. I'm not trying to knock the AUR, but it's easier to get into the Arch [community] distro than rpmfusion.free, and the AUR is a cakewalk. It's a really poo poo comparison.

I get what you're saying, but you don't have to trust AUR. AUR isn't a "repository" in the traditional sense. I agree with you that most people probably use it like one. I have already said that I think it's a really bad idea to do so.

I think the problem is that you are assuming there is a trusted repository somewhere that holds the package you want. I don't know anything about rpmfusion, so maybe they are a trusted source within the rpm community and are comparable to trusting an official repo. What if the package you want isn't there either?

evol262 posted:

In short, the AUR is no better than downloading tarballs and ./configure && make && make install.

Yes exactly! The PKGBUILD is literally a script for pulling a tarball from upstream and using it to compile a package. I'm not saying it's not. In fact, that's really what I meant when I said "alternative to AUR". Instead of a "trusted" binary repository as an alternative to AUR, its equivalent is really just an accessible way to build packages from upstream.

loose-fish
Apr 1, 2005

Xik posted:

Rolling Release: I think you're pretty much boned if you are looking for a full rolling distro, unless you want to try Gentoo I guess? I haven't actually done a standard distro upgrade in a long time. The server that I run Debian on has hardware failures more often than Debian is updated, so I just install the latest version when that happens. I can't imagine modern distro upgrades are any more of a hassle than when Arch decides to implement some major system-breaking change though.
I was on Ubuntu until 10.04 but it was such a hassle to fix all the breakage from distro upgrades. At the time it was "fix a lot of stuff every 6 months" vs. "fix one or two things once in a while". As you say, maybe upgrades aren't as bad anymore, but I really like getting newer packages, and the last time I had to do serious maintenance was when systemd was introduced to Arch in 2012.


Xik posted:

Pacman: Have you played with Yum before? I recently had a play, it's quite nice. The output is probably the best out of all the package managers in my opinion. I don't think you'll find another distro which isn't based on Arch that has adopted Pacman.
Yum does look nice, it's been a while since I tried Fedora, maybe I'll throw Rawhide on a VM and check it out.

Xik posted:

AUR: I have no idea, if you find something similar let me know. I actually like the AUR, but I manually check the pkgfiles, URLs and any other scripts before I build any package. I think using the AUR behind a non-interactive script or package manager like it's just another official repository is really dumb.
Yeah, the way I use it the AUR is just a convenient way to get stuff from Github or other platforms like that and turn them into packages I can manage with Pacman.
I guess SlackBuilds are similar but yeah ...

evol262 posted:

ebuilds from Gentoo are a close equivalent. And yes, specfiles. But it's a worthless distinction, really. A better question would be what you think is so great about the AUR and why you can't live without it?
I don't "need" many packages that are not available through the official repositories but if I just wan't to check something out it's less work to pull it in through the AUR than to find repository, add repository, and then delete the repository if it turns out I don't want to keep the package.
For stuff I want to keep permanently a real repository would be preferred.

SurgicalOntologist
Jun 17, 2004

mod sassinator posted:

Nice! How much ram does your chromebook have, and is crouton/chroot reasonably fast for IDEs/editors? Looking at the Acer C720 but you can really only get the 2GB ram version now and I'm a little worried it will be slow. Can't beat $200 for a real 9-hour-battery-life tiny laptop though.

I have the Acer C7 (the previous version), it came with 2GB but there's another slot so I upgraded to 6GB. I got it for $150 refurbished so the whole thing was under $200. It's impressively capable, I first installed Unity thinking I'd have to downgrade to something lighter but there's really no need. Sublime Text is snappy, PyCharm is sluggish when it's re-indexing but other than that it's fine. I haven't had to disable any of the live inspection features, though my projects aren't very large.

If the 720 really gets 9 hours that's a great deal. Mine gets closer to 2, maybe 3 if I stick to ChromeOS.

Varkk
Apr 17, 2004

mod sassinator posted:

I have a main machine running Ubuntu 12.04 with a monitor resolution of 1920x1200. When I connect to this machine's VNC server (the built in Ubuntu vino server) on my laptop with 1440x900 resolution it looks terrible and is impossible to use because of the resolution difference. How can I set things up so when I VNC in I either get a new session at the same resolution as my laptop, or temporarily resize the session that's being connected to? I've searched around and there are a lot of conflicting and confusing ways to do it by hacking xorg configs and whatnot--is there any easy way to do this? I'm not married to VNC either, I just want a way to connect to my Ubuntu machine remotely that works with different resolutions without scaling.

Have you thought about using nxclient on the laptop and connecting to freenx on your main PC?

Or even use plain old X forwarding?
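Or, if you stay with VNC, running a second server session at the laptop's resolution sidesteps the mismatch entirely. Rough sketch, assuming a standalone server like TigerVNC is installed alongside vino:

code:
vncserver :1 -geometry 1440x900 -depth 24
# then point the laptop's viewer at yourmachine:1 instead of the main display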

I am not a book
Mar 9, 2013

Riso posted:

How does SELinux compare with AppArmor?

Just gonna bump this because I'm curious as well.

evol262
Nov 30, 2010
#!/usr/bin/perl

I am not a book posted:

Just gonna bump this because I'm curious as well.

Short answer: there are more options than these two, though they're the biggest (along with grsec). Lots more options in the kernel config.

SELinux works on extended attributes with contexts for processes and labels for files acting something like an extremely granular traditional UNIX security model. Writing policies is hard. And SELinux is extremely strict by default. Copy files to /var/www as root? They'll probably have the wrong context and apache won't even be able to read them.
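(The /var/www case is the classic example; the usual fix is just to put the default contexts back:)

code:
ls -Z /var/www/html            # see what context the copied files ended up with
restorecon -Rv /var/www/html   # reapply whatever context the policy says they should have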

AppArmor relies on per-path profiles. /usr/sbin/rndc is not the same as /usr/local/bin/rndc, and a policy for one won't affect the other. As a corollary, AppArmor is pretty forgiving out of the box. There's no real notion of a "default" policy. Novell and Canonical are good about writing profiles, and AppArmor is easy to understand.
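On an Ubuntu or SUSE box you can poke at that model directly (the aa-* tools come from apparmor-utils; the tcpdump profile is just one that happens to ship with Ubuntu):

code:
aa-status                        # which profiles are loaded, enforce vs. complain mode
aa-complain /usr/sbin/tcpdump    # flip one profile to log-only while you debug it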

A vuln in one will probably mean a vuln in the other. But it's reasonable to say that SELinux is more stringent, assuming there are no kernel vulnerabilities. SELinux assumes everything is bad and needs to follow rules. AppArmor says "only these apps need to be watched, and only let them do these things".

Illusive Fuck Man
Jul 5, 2004
RIP John McCain feel better xoxo 💋 🙏
Taco Defender
My coworker wants to change all our servers from CentOS 6.4 to Ubuntu 12.04. I loving hate Ubuntu (probably irrationally). What are the standard arguments for using CentOS so I can convince my boss to veto this poo poo? They don't need to be pure objective Truth, I just want ammunition.

Right now I have this:
Our poo poo works as it is right now.
It's a hell of a lot easier to create RPM packages of our software compared with debs. (or I'm incompetent. idk)
We have working rpm repositories.
It's slightly simpler to manage rpm repos.
We're using Cobbler and cobbler doesn't handle ubuntu or deb repos well. (or I hosed something up when I attempted/failed to provision ubuntu)
I prefer kickstarts over preseed.
I prefer yum over apt.
Our poo poo works as it is right now.

The email I just got says the 2.6 kernel is causing "significant trouble" and I'm like what?? I kinda wish we could just hire someone who really knew what they were doing to handle this stuff but for now it's all me.

Varkk
Apr 17, 2004

Turn it around and ask what going to all this effort to rip out your existing working system will actually gain the company.
Ask for details of the significant trouble the 2.6.xx kernel in CentOS is causing. I am betting they don't realise how many backported fixes and features Red Hat puts into the RHEL kernels. The kernel in RHEL/CentOS 6.4 is probably closer to the 3.12.xx kernel in many regards as a result.
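One concrete thing to show them (run on any of the CentOS boxes):

code:
uname -r                           # the -xxx.el6 build suffix is where RHEL's changes live
rpm -q --changelog kernel | less   # a very long list of backported fixes behind that "2.6.32"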

FlapYoJacks
Feb 12, 2009

Illusive gently caress Man posted:

My coworker wants to change all our servers from CentOS 6.4 to Ubuntu 12.04. I loving hate Ubuntu (probably irrationally). What are the standard arguments for using CentOS so I can convince my boss to veto this poo poo? They don't need to be pure objective Truth, I just want ammunition.

Right now I have this:
Our poo poo works as it is right now.
It's a hell of a lot easier to create RPM packages of our software compared with debs. (or I'm incompetent. idk)
We have working rpm repositories.
It's slightly simpler to manage rpm repos.
We're using Cobbler and cobbler doesn't handle ubuntu or deb repos well. (or I hosed something up when I attempted/failed to provision ubuntu)
I prefer kickstarts over preseed.
I prefer yum over apt.
Our poo poo works as it is right now.

The email I just got says the 2.6 kernel is causing "significant trouble" and I'm like what?? I kinda wish we could just hire someone who really knew what they were doing to handle this stuff but for now it's all me.

Feel free to hire me as your linux guy. :unsmith:

As such, he sounds like a fanboy of Ubuntu and not somebody who has an engineering head on his shoulders. If it isn't broken, don't fix it till it is! Leave that poo poo alone. It sounds like Cent works perfectly fine for your use case, and Ubuntu has a ton of config files that aren't in the same location as Cent's, or may not even exist.

Apache is handled differently as well, so if you have anything that uses .htaccess files, you may have to change them significantly.

In summary: gently caress that guy.

FlapYoJacks fucked around with this message at 02:55 on Mar 12, 2014

JHVH-1
Jun 28, 2002
Tell them to write up a formal proposal of why Ubuntu would be better, along with a migration plan: resource allocation and associated costs, timelines, plans to limit impact (like a staggered migration strategy and rollback), and the differences between the two platforms that will require extra man-hours.

If they think it's such a great idea then they should at least be able to put the work into planning and writing a proposal. If that's too much work, they probably aren't going to put in the actual work to change either.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

ratbert90 posted:

Feel free to hire me as your linux guy. :unsmith:

As such, he sounds like a fanboy of Ubuntu and not somebody who has an engineering head on his shoulders. If it isn't broken, don't fix it till it is! Leave that poo poo alone.

This is a bad engineering mindset as well. You shouldn't leave CentOS alone for years with tons of unpatched security vulnerabilities because "it works".

pseudorandom name
May 6, 2007

Hey, it worked for Sony!

Serephina
Nov 8, 2005

恐竜戦隊
ジュウレンジャー
So, uh, I read the OP then skipped down to the last page. Questions/issues regarding desktop environments that work better on legacy hardware:

I have an ATI Radeon HD 4650 in an old system. This causes no end of headaches, as apparently ATI stopped updating their proprietary linux driver ['fglrx'] for older cards. So I'm left with the open source alternative, 'radeon', which sucks balls for any shiny gaming and 3d stuff, or using the last known good proprietary driver, 'fglrx-legacy', which is unmaintained. Oh, and fglrx-legacy doesn't work with newer versions of X, which I understand to be the ONLY window manager in existence.

This leaves me hunting down older distros such as Ubuntu 12.04.1 (but not 12.04.2, that's too new!) to get a sufficiently outdated version of X. 3d gaming SEEMS to work ok, but (naturally...) desktop performance then becomes unusably slow.

This is kinda where I throw my hands up and ask for help -slash- consider Windows.

Suggestions for legacy hardware Desktop Environments?


FlapYoJacks
Feb 12, 2009

Suspicious Dish posted:

This is a bad engineering mindset as well. You shouldn't leave CentOS alone for years with tons of unpatched security vulnerabilities because "it works".

Did I say that? There's a difference between updating an OS and changing to a completely different distro just because you don't like Cent.
