|
Molten Boron posted:I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem?
|
# ? Mar 6, 2014 01:47 |
|
|
|
Shaocaholica posted:Which kernel versions have USB attached SCSI? I can't seem to find clear data on that. Git shows UAS was added in 115bb1ff (~2.6.37). From your quote I doubt the code gets much use considering how long it has been marked broken, and it looks like it's still marked broken in 3.13.
|
# ? Mar 6, 2014 02:26 |
|
Misogynist posted:Did you rm /var/lib/rpm/db* before running rpm --rebuilddb? Yes I did. xdice posted:I'm assuming you've tried "rpm --rebuilddb" - can you paste in the error(s) you're seeing? I can't access the server from home so I can't give you the full list of errors right now, but they start something like this (taken from here):
rpmdb: /var/lib/rpm/Packages: unexpected file type or format
error: cannot open Packages index using db3 - Invalid argument (22)
I would've tried rebuilding from /var/log/rpmpkgs, but Red Hat moved that out of the base rpm package for RHEL 6, and I didn't realize it was missing until it was too late.
|
# ? Mar 6, 2014 02:30 |
|
Molten Boron posted:Yes I did. Can you also post the output of ldd /usr/bin/rpm?
|
# ? Mar 6, 2014 03:28 |
|
Molten Boron posted:I've got a RHEL 6 server with a broken RPM database. None of my attempts to rebuild the DB have worked, so I've been steeling myself for an OS reinstall. Can I get by with an in-place upgrade, or will nothing short of a full install fix the problem? Misogynist posted:Did you rm /var/lib/rpm/db* before running rpm --rebuilddb? What does `file /var/lib/rpm/Packages` say? Then:
rpm -Vvv rpm (you'll get an error, but the output at the beginning is what matters here)
yum clean all
rm /var/lib/rpm/__db.0*
rpm --rebuilddb
There are nasty ways to rebuild with db_dump and db_load, but they're not guaranteed, not parsable without writing C, and not worth the effort when you can reinstall. You can also rpm --initdb, then rpm -i -v --noscripts --nodeps --notriggers --excludepath / ... But don't. rpm --rebuilddb will almost certainly fix it.
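The sequence above, collected into one place — a sketch for RHEL 6, wrapped in a dry-run helper so nothing destructive happens until you delete the `echo` (and only after backing up /var/lib/rpm):

```shell
#!/bin/sh
# Dry-run sketch of the rpmdb recovery sequence discussed above (RHEL 6).
# Each step is echoed rather than executed; remove the "echo" in run()
# to actually perform it.
run() { echo "+ $*"; }

run file /var/lib/rpm/Packages      # sanity-check the Packages file format first
run yum clean all
run rm -f '/var/lib/rpm/__db.0*'    # stale Berkeley DB environment files
run rpm --rebuilddb
run rpm -Vvv rpm                    # verify; the early output shows db access working
```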
|
# ? Mar 6, 2014 04:08 |
|
EDIT: I fixed this, solution at the bottom. evol262 posted:I totally missed this earlier, but it doesn't look like xl2tpd is actually picking it up. Have you tried xl2tpd -D? My ipsec.conf looks more or less like this on both sides: code:
code:
code:
code:
EDIT: It works! The problem was with the passthrough-for-non-l2tp that I added to deal with the L2TP clients coming in. The issue is that it was set to route the traffic for 0.0.0.0/0, which caught the traffic originating from 10.0.0.0/8 that I wanted to pass through to the other side of the site-to-site VPN. So, I busted out a subnet calculator and put a hole for 10.0.0.0/8 in the middle of the rightsubnet and came up with: code:
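For the curious, the subnet-calculator step is mechanical: 0.0.0.0/0 with a 10.0.0.0/8 hole punched in it decomposes into eight CIDR blocks. A quick way to derive them (assumes python3 is on hand; how you feed the list to rightsubnet= depends on your IPsec stack):

```shell
# Print every CIDR block covering 0.0.0.0/0 except 10.0.0.0/8
python3 - <<'EOF'
import ipaddress
full = ipaddress.ip_network('0.0.0.0/0')
hole = ipaddress.ip_network('10.0.0.0/8')
for net in sorted(full.address_exclude(hole)):
    print(net)
EOF
```

Sorted, that comes out to 0.0.0.0/5, 8.0.0.0/7, 11.0.0.0/8, 12.0.0.0/6, 16.0.0.0/4, 32.0.0.0/3, 64.0.0.0/2 and 128.0.0.0/1 — everything except the hole.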
I hope none of you ever have to deal with IPsec. Vulture Culture fucked around with this message at 08:16 on Mar 6, 2014 |
# ? Mar 6, 2014 06:37 |
|
spankmeister posted:Nope, I had 0.8.0-3 and the latest is -4, one of the bugs fixed is: And it worked, the fedup update from testing did the trick.
|
# ? Mar 6, 2014 08:48 |
I'm currently running elementary OS on my school/work laptop, but after a few annoying freezes I'm looking to install something else. Since it is a laptop, should I bother with encrypting my home directory?
|
|
# ? Mar 6, 2014 18:20 |
|
calandryll posted:I'm currently running elementary OS on my school/work laptop, but after a few annoying freezes I'm looking to install something else. Since it is a laptop, should I bother with encrypting my home directory? I did crypto on an older laptop (Core 2 Duo/3GB) running Mint 16 and honestly noticed no slowdowns; go for it if you want.
|
# ? Mar 6, 2014 19:10 |
|
Is there any special way I should partition a system if I am mainly going to be using it for running virtual machines?
|
# ? Mar 7, 2014 02:44 |
|
Stealthgerbil posted:Is there any special way I should partition a system if I am mainly going to be using it for running virtual machines? With KVM? Make /var/lib/libvirt/images separate, preferably NFS, iSCSI, gluster, or ceph. Maybe OCFS2/GFS2 if you want a clustered filesystem on a SAN LUN.
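As a sketch of the NFS option (hostname and export path here are made up — adjust to taste), the image store ends up as nothing more than an fstab entry:

```
# /etc/fstab -- hypothetical NFS server "storage01" exporting /exports/vmimages
storage01:/exports/vmimages  /var/lib/libvirt/images  nfs  defaults,_netdev  0 0
```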
|
# ? Mar 7, 2014 04:14 |
|
Quick question: is there anything like a boot manager that allows access to its menu over the network? I'm dreaming of the possibility of logging in to my windows box, rebooting it, and remotely telling the boot manager to boot into linux / VMware ESXi, and vice versa. vvvv will look into the serial2usb method, thanks! kyuss fucked around with this message at 20:07 on Mar 8, 2014 |
# ? Mar 8, 2014 14:51 |
|
kyuss posted:Quick question: is there anything like a boot manager that allows access to its menu over the network? You can use cobbler or foreman if you don't mind reimaging. But what you probably actually want is ipmi, or dumping output from GRUB to a raspberry pi over serial
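The GRUB-over-serial option is just a couple of lines in GRUB legacy's config (RHEL 6-era syntax; unit and speed are assumptions — match whatever your serial adapter uses):

```
# /boot/grub/grub.conf -- mirror the boot menu to the first serial port
serial --unit=0 --speed=115200
terminal --timeout=10 serial console
```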
|
# ? Mar 8, 2014 18:07 |
|
Xik posted:I appreciate the input but those have everything pre-configured and pre-installed, including their own desktop environments. If I wanted that, I probably wouldn't be using Arch. Debian or Fedora, or something more stable and established, would be my preferred option. I was in a similar situation and stumbled upon this, which is pretty much an up-to-date version of the old Arch installer. I have no idea why it's not on the standard ISOs anymore; it worked great for me. Are there any more options if you want a pretty minimal install and a rolling release model? I gave Debian testing a try, but I prefer pacman to apt-get/aptitude and I don't really want to manage repositories for the three packages I can't get otherwise (yes, the AUR can be extremely spotty, but it's usually fine when it's just pulling stuff from git and packaging it ...).
|
# ? Mar 8, 2014 19:01 |
|
I have a main machine running Ubuntu 12.04 with a monitor resolution of 1920x1200. When I connect to this machine's VNC server (the built in Ubuntu vino server) on my laptop with 1440x900 resolution it looks terrible and is impossible to use because of the resolution difference. How can I set things up so when I VNC in I either get a new session at the same resolution as my laptop, or temporarily resize the session that's being connected to? I've searched around and there are a lot of conflicting and confusing ways to do it by hacking xorg configs and whatnot--is there any easy way to do this? I'm not married to VNC either, I just want a way to connect to my Ubuntu machine remotely that works with different resolutions without scaling.
mod sassinator fucked around with this message at 19:44 on Mar 8, 2014 |
# ? Mar 8, 2014 19:38 |
|
loose-fish posted:Are there any more options if you want a pretty minimal install and a rolling release model? If rolling release, pacman, and the AUR are requirements, you're not leaving much room for alternatives.
Minimal: Almost every major distro can pull this off (even Ubuntu, last I checked). I've recently learned they pretty much all have curses-based installers that leave you in the same place.
Rolling release: I think you're pretty much boned if you're looking for a full rolling distro, unless you want to try Gentoo, I guess? I haven't actually done a standard distro upgrade in a long time. The server that I run Debian on has hardware failures more often than Debian is updated, so I just install the latest version when that happens. I can't imagine modern distro upgrades are any more of a hassle than when Arch decides to implement some major system-breaking change, though.
Pacman: Have you played with yum before? I recently had a play; it's quite nice. The output is probably the best out of all the package managers, in my opinion. I don't think you'll find another distro that isn't based on Arch that has adopted pacman.
AUR: I have no idea; if you find something similar, let me know. I actually like the AUR, but I manually check the PKGBUILDs, URLs and any other scripts before I build any package. I think using the AUR behind a non-interactive script or package manager like it's just another official repository is really dumb.
mod sassinator posted:I'm not married to VNC either, I just want a way to connect to my Ubuntu machine remotely that works with different resolutions without scaling. This is probably a dumb question, but does what you do over VNC actually require a GUI? Perhaps whatever you're doing has a CLI interface that you can control via ssh?
|
# ? Mar 9, 2014 02:23 |
|
Yeah, I try to stick to the CLI, but sometimes I need to connect to the machine and use Eclipse/IntelliJ or other development stuff. I was secretly hoping I could start using a cheap Chromebook as a nice $200 web browser and occasional connect-to-other-machines-and-get-stuff-done device.
|
# ? Mar 9, 2014 04:06 |
|
Xik posted:If rolling release, pacman and AUR are requirements you're not leaving much room for alternatives . What does a requirement of AUR entail? There's a whole bunch of rpmfusion type sites that have stuff not included in the RHEL/Fedora repositories. Fedora's "rolling release" version is Rawhide, but like you said it's probably a bad idea to use one when large sweeping changes hit the distribution fairly regularly with little user benefit. mod sassinator posted:Yeah I try to stick to the CLI, but sometimes I need to connect to the machine and use Eclipse/IntelliJ or other development stuff. I use emacs to edit files remotely, and it looks like eclipse can do the same thing. Maybe you could set up samba/nfs on your computer also?
|
# ? Mar 9, 2014 05:00 |
|
hifi posted:What does a requirement of AUR entail? There's a whole bunch of rpmfusion-type sites that have stuff not included in the RHEL/Fedora repositories. I don't think those are the equivalent of AUR. Those are just 3rd party repositories that supply binary packages that aren't in the official repository, right? Some people use AUR in a way that would make those seem like an alternative. But really, PKGBUILD files are just a recipe for building your own packages from upstream sources. The equivalent would probably be building your own rpm or deb packages from source and then installing that package with the standard package manager. e: I quickly looked up what's involved in building an rpm package; I think the direct equivalent of AUR would probably be a repository of spec files. Xik fucked around with this message at 05:40 on Mar 9, 2014 |
# ? Mar 9, 2014 05:26 |
|
They'd be "source rpms" or "srpms". Which you can get from any yum repo.
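For comparison, the RPM-world analogue of a PKGBUILD is the spec file — also just a build recipe, not a binary. A toy sketch (package name and contents are made up; real specs also carry a %files list and changelog):

```
# hello.spec -- hypothetical toy package
Name:     hello
Version:  1.0
Release:  1%{?dist}
Summary:  Toy example of a build recipe
License:  MIT
Source0:  hello-1.0.tar.gz

%description
The PKGBUILD equivalent, RPM flavor.

%prep
%setup -q

%build
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}
```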
|
# ? Mar 9, 2014 06:32 |
|
Xik posted:I don't think those are the equivalent of AUR. Those are just 3rd party repositories that supply binary packages that aren't in the official repository right? ebuilds from Gentoo are a close equivalent. And yes, specfiles. But it's a worthless distinction, really. A better question would be what you think is so great about the AUR and why you can't live without it? PPAs are broadly equivalent, and both PPAs and the AUR came into existence because packages people wanted weren't in the official repos, or the author only provided RPMs. At this point, it's more appropriate for Fedora than Ubuntu, but eh. Why are PPAs a requirement when you can get GPG-signed packages with available srpms (which include specfiles) that you can read if you want to pretend you're diligent and can really tell (hint: you can't. GPG is much safer than reading pkgbuilds or specfiles; I'd give 100:1 odds that I could set up an official-looking Google Code or github site for a project with a name that's so close to correct you'd trust it, and get a pkgbuild which includes vulnerabilities accepted in a week)? Why is it required?
|
# ? Mar 9, 2014 07:25 |
|
evol262 posted:ebuilds from Gentoo are a close equivalent. And yes, specfiles. But it's a worthless distinction, really. A better question would be what you think is so great about the AUR and why you can't live without it? I could live without it. It's honestly not a massive deal breaker for me, but was for the person I was replying to. If I moved to another distro in the future for my main desktop (I'll probably switch at next hardware or massive system failure) then it would probably be easier for me to just find alternative applications for the things I can't get from official repos. Hell, most of the things probably don't even need to be installed via the system package manager (like emacs extensions). evol262 posted:Why are PPAs a requirements when you can get GPG-signed packages with available srpms (which include specfiles) that you can read if you want to pretend you're diligent and you can really tell (hint: you can't. GPG is much safer than reading pkgbuilds or specfiles; I'd give 100:1 odds that I could set up an official-looking Google Code or github site for a project with a name that's so close to correct you'd trust it and get a pkgbuild which includes vulnerabilities accepted in a week)? I don't really understand your argument, I'm obviously missing something. Where are these GPG-signed packages coming from? The whole point of this exercise is to install software that is not in the official repos. If upstream supplies GPG signed packages, that's perfect, problem solved (you already trust them by running their software). If they are signed by some random person operating a third-party repository, what difference does it make if they are even signed? The whole point is to not trust them. It's honestly not that difficult to be "diligent" when checking PKGBUILD files, although I'm sure 90% of users don't even bother checking them. Basically all the packages I have from AUR have a source on github. 
These are repos I already have "starred" and I know are the "official" repo. Things like magit for instance. The urls in terminal emacs have clickable links so I click on them, see it's the "real one", check what else the PKGBUILD script does and then when satisfied, build the package.
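The integrity half of that checking is mechanical — makepkg boils down to comparing the fetched source against a checksum pinned in the PKGBUILD. The same check done by hand (toy file and hash, just to show the shape):

```shell
# Stand-in for a downloaded source tarball
src=$(mktemp)
printf 'hello\n' > "$src"

# The checksum a PKGBUILD would pin (this one matches the toy file above)
expected=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
actual=$(sha256sum "$src" | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
fi
rm -f "$src"
```

The part no checksum can cover — whether upstream itself is the real project — is exactly the trust question being argued here.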
|
# ? Mar 9, 2014 08:13 |
|
mod sassinator posted:Yeah I try to stick to the CLI, but sometimes I need to connect to the machine and use Eclipse/IntelliJ or other development stuff. I use a chromebook for this purpose and I love it. I'm serving iPython Notebook and RStudio Server from my desktop, SSH and command-line crouton in a tab is great. I have Unity on the chroot but the only things I load it for are Zotero+PDFs and PyCharm. I set up a remote interpreter for PyCharm (which you should be able to do as well) but I've barely used it, the Chromebook's powerful enough to run all my tests at least. Highly recommended for Linux people as a tool to connect to servers/remote machines, plus it's surprisingly powerful on its own.
|
# ? Mar 9, 2014 08:33 |
|
SurgicalOntologist posted:I use a chromebook for this purpose and I love it. I'm serving iPython Notebook and RStudio Server from my desktop, SSH and command-line crouton in a tab is great. I have Unity on the chroot but the only things I load it for are Zotero+PDFs and PyCharm. I set up a remote interpreter for PyCharm (which you should be able to do as well) but I've barely used it, the Chromebook's powerful enough to run all my tests at least. Highly recommended for Linux people as a tool to connect to servers/remote machines, plus it's surprisingly powerful on its own. Nice! How much RAM does your chromebook have, and is crouton/chroot reasonably fast for IDEs/editors? Looking at the Acer C720, but you can really only get the 2GB RAM version now and I'm a little worried it will be slow. Can't beat $200 for a real 9-hour-battery-life tiny laptop though.
|
# ? Mar 9, 2014 08:55 |
|
Am I the only person who's having a terrible experience running Red Hat-based distros as a guest OS? I have terrible performance issues with Fedora, while CentOS and Scientific Linux images aren't even detected as installation media. I've tried VirtualBox and VMware Workstation. I've tried a lot of Debian derivatives that just work out of the box... what's going on here?
|
# ? Mar 9, 2014 09:03 |
|
Xik posted:I don't really understand your argument, I'm obviously missing something. Where are these GPG-signed packages coming from? The whole point of this exercise is to install software that is not in the official repos. If upstream supplies GPG signed packages, that's perfect, problem solved (you already trust them by running their software). It's excellent for you that every AUR package you have installed is one in which you're familiar with upstream and every dependency, with their associated URLs, but it's not reasonable security practice, and clicking on a github link gives you no assurances whatsoever unless you're already familiar, since it would be easy to fake. This discounts reading the pkgbuild every time it changes, and making sure there are corresponding release notes. The point of trusting a 3rd party repo is precisely the same as that of trusting Thawte. EPEL polices their packages, and they're signed and countersigned. If you don't trust them, don't use it, but github is at best as trustworthy and probably less so. This is the same argument by which people refused to trust the PA-RISC porting center. Obviously $somepackage self-signing is pretty useless. RPMfusion signing $somepackage is not. In short, the AUR is no better than downloading tarballs and ./configure && make && make install. Except it's wget-ing them, and 99% of consumers don't look any closer, with no mechanism of trust. It's convenient and it has a ton of packages, but using it "securely" is time consuming and questionably reliable. Basically, you should look at what it takes to get a package into the AUR (basically nothing, submit a tarball) versus RPMfusion (sponsorship -> review -> acceptance -> ssh keys -> builds with user certs which expire -> signed -> released) and ask yourself whether clicking a URL is really more trustworthy. I'm not trying to knock the AUR, but it's easier to get into the Arch [community] repo than rpmfusion.free, and the AUR is a cakewalk.
It's a really poo poo comparison.
|
# ? Mar 9, 2014 09:26 |
|
evol262 posted:What I mean is that this argument is absurd, essentially. The point of signing rpmfusion or epel is that you trust the process of the repo, and aur (as a 3rd party analogue) has no equivalent. Yes, you have to trust the repository. That's my point. For debian, the alternative to things being in the official repos is adding a whole bunch of third party repositories to apt. How is them being signed helpful at all? It's like the equivalent of filling out the "publisher" field in Windows installers, worthless if you don't trust the source you download it from. evol262 posted:It's excellent for you that every AUR package you have installed is one in which you're familiar with upstream and every dependency, with their associated URLs, but it's not reasonable security practice, and clicking on a github link gives you no assurances whatsoever unless you're already familiar, since it would be easy to This discounts reading the pkgbuild every time it changes, and making sure there's corresponding release notes. I am familiar with every AUR package on my system. I've never needed to check the whole dependency tree because I've never had a dependency that isn't available in the official repository. I know this isn't common, which is why I said it's really dumb that folks use it just like another repository and use a third party tool to make it seamlessly integrate into pacman. evol262 posted:Basically, you should look at what it takes to get a package into the AUR (basically nothing, submit a tarball) versus RPMfusion (sponsorship ->review -> acceptance -> ssh keys -> builds with user certs which expire -> signed -> released) and ask yourself whether clicking a URL is really more trustworthy. I'm not trying to knock the AUR, but it's easier to get into the Arch [community] distro than rpmfusion.free, and the AUR is a cakewalk. It's a really poo poo comparison. I get what you're saying, but you don't have to trust AUR. 
AUR isn't a "repository" in the traditional sense. I agree with you that most people probably use it like one. I have already said that I think it's a really bad idea to do so. I think the problem is that you are assuming there is a trusted repository somewhere that holds the package you want. I don't know anything about rpmfusion, so maybe they are a trusted source within the rpm community and are comparable to trusting an official repo. What if the package you want isn't there either? evol262 posted:In short, the AUR is no better than downloading tarballs and ./configure && make && make install. Yes exactly! The PKGBUILD is literally a script for pulling a tarball from upstream and using it to compile a package. I'm not saying it's not. In fact, that's really what I meant when I said "alternative to AUR". Instead of a "trusted" binary repository as an alternative to the AUR, its equivalent is really just an accessible way to build packages from upstream.
|
# ? Mar 9, 2014 10:07 |
|
Xik posted:Rolling Release: I think you're pretty much boned if you are looking for a full rolling distro, unless you want to try Gentoo I guess? I haven't actually done a standard distro upgrade in a long time. The server that I run Debian on has hardware failures more often then Debian is updated so I just install the latest version when that happens. I can't imagine modern distro upgrades are any more of a hassle then when Arch decides to implement some major system breaking change though. Xik posted:Pacman: Have you played with Yum before? I recently had a play, it's quite nice. The output is probably the best out of all the package managers in my opinion. I don't think you'll find another distro which isn't based on Arch that has adopted Pacman. Xik posted:AUR: I have no idea, if you find something similar let me know. I actually like the AUR, but I manually check the pkgfiles, URLs and any other scripts before I build any package. I think using the AUR behind a non-interactive script or package manager like it's just another official repository is really dumb. I guess SlackBuilds are similar but yeah ... evol262 posted:ebuilds from Gentoo are a close equivalent. And yes, specfiles. But it's a worthless distinction, really. A better question would be what you think is so great about the AUR and why you can't live without it? For stuff I want to keep permanently a real repository would be preferred.
|
# ? Mar 9, 2014 12:23 |
|
mod sassinator posted:Nice! How much ram does your chromebook have and is crouton/chroot reasonably fast to use IDEs/editors? Looking at the Acer C720 but you can really only get the 2GB ram version now and am a little worried it will be slow. Can't beat $200 for a real 9 hour battery life tiny laptop though. I have the Acer C7 (the previous version), it came with 2GB but there's another slot so I upgraded to 6GB. I got it for $150 refurbished so the whole thing was under $200. It's impressively capable, I first installed Unity thinking I'd have to downgrade to something lighter but there's really no need. Sublime Text is snappy, PyCharm is sluggish when it's re-indexing but other than that it's fine. I haven't had to disable any of the live inspection features, though my projects aren't very large. If the 720 really gets 9 hours that's a great deal. Mine gets closer to 2, maybe 3 if I stick to ChromeOS.
|
# ? Mar 9, 2014 19:05 |
|
mod sassinator posted:I have a main machine running Ubuntu 12.04 with a monitor resolution of 1920x1200. When I connect to this machine's VNC server (the built in Ubuntu vino server) on my laptop with 1440x900 resolution it looks terrible and is impossible to use because of the resolution difference. How can I set things up so when I VNC in I either get a new session at the same resolution as my laptop, or temporarily resize the session that's being connected to? I've searched around and there are a lot of conflicting and confusing ways to do it by hacking xorg configs and whatnot--is there any easy way to do this? I'm not married to VNC either, I just want a way to connect to my Ubuntu machine remotely that works with different resolutions without scaling. Have you thought about using nxclient on the laptop and connecting to freenx on you main PC? Or even use plain old X forwarding?
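If X forwarding fits, it's one command — no server-side session or resolution mismatch to fight with (user and host names here are placeholders):

```
# Run the remote IDE, display it on the laptop; -X forwards X11, -C compresses,
# which helps a lot on slow links
ssh -X -C user@desktop eclipse
```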
|
# ? Mar 9, 2014 21:46 |
|
Riso posted:How does SELinux compare with AppArmour? Just gonna bump this because I'm curious as well.
|
# ? Mar 10, 2014 03:29 |
|
I am not a book posted:Just gonna bump this because I'm curious as well. Short answer: there are more options than these two, though they're the biggest (along with grsec). Lots more options in the kernel config. SELinux works on extended attributes, with contexts for processes and labels for files, acting something like an extremely granular traditional UNIX security model. Writing policies is hard. And SELinux is extremely strict by default. Copy files to /var/www as root? They'll probably have the wrong context, and Apache won't even be able to read them. AppArmor relies on per-path profiles. /usr/sbin/rndc is not the same as /usr/local/bin/rndc, and a policy for one won't affect the other. As a corollary, AppArmor is pretty forgiving out of the box. There's no real notion of a "default" policy. Novell and Canonical are good about writing profiles, and AppArmor is easy to understand. A vuln in one will probably mean a vuln in the other. But it's reasonable to say that SELinux is more stringent, assuming there are no kernel vulnerabilities. SELinux assumes everything is bad and needs to follow rules. AppArmor says "only these apps need to be watched, and only let them do these things".
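The /var/www example above, as it typically plays out on an SELinux box (a sketch — exact context names vary by policy):

```
# Files copied in as root keep their old context, so Apache can't read them
ls -Z /var/www/html/index.html    # shows e.g. admin_home_t instead of httpd_sys_content_t
restorecon -Rv /var/www/html      # relabel to what the policy expects
```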
|
# ? Mar 10, 2014 05:32 |
|
My coworker wants to change all our servers from CentOS 6.4 to Ubuntu 12.04. I loving hate Ubuntu (probably irrationally). What are the standard arguments for using CentOS so I can convince my boss to veto this poo poo? They don't need to be pure objective Truth, I just want ammunition. Right now I have this:
Our poo poo works as it is right now.
It's a hell of a lot easier to create RPM packages of our software compared with debs. (or I'm incompetent. idk)
We have working rpm repositories. It's slightly simpler to manage rpm repos.
We're using Cobbler and Cobbler doesn't handle Ubuntu or deb repos well. (or I hosed something up when I attempted/failed to provision Ubuntu)
I prefer kickstarts over preseed.
I prefer yum over apt.
Our poo poo works as it is right now.
The email I just got says the 2.6 kernel is causing "significant trouble" and I'm like, what?? I kinda wish we could just hire someone who really knew what they were doing to handle this stuff, but for now it's all me.
|
# ? Mar 12, 2014 02:08 |
|
Turn it around and ask what going to all this effort to rip out your existing working system will gain for the company. Ask for details of the significant trouble the 2.6.xx kernel in CentOS is causing. I am betting they don't realise how many backported fixes and features Red Hat puts into the RHEL kernels. The kernel in RHEL/CentOS 6.4 is probably closer to the 3.12.xx kernel in many regards as a result.
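You can make the backport point concrete on any RHEL/CentOS box — the stock kernel's changelog is full of fixes cherry-picked from far newer upstream releases:

```
# Count security backports in the distro kernel's changelog
rpm -q --changelog kernel | grep -c CVE
# Skim the most recent entries
rpm -q --changelog kernel | head -20
```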
|
# ? Mar 12, 2014 02:26 |
|
Illusive gently caress Man posted:My coworker wants to change all our servers from CentOS 6.4 to Ubuntu 12.04. I loving hate Ubuntu (probably irrationally). What are the standard arguments for using CentOS so I can convince my boss to veto this poo poo? They don't need to be pure objective Truth, I just want ammunition. Feel free to hire me for your linux guy. He sounds like a fanboy of Ubuntu and not somebody who has an engineering head on his shoulders. If it isn't broken, don't fix it till it is! Leave that poo poo alone. Cent works perfectly fine for your use-case, it sounds like, and Ubuntu has a ton of different config files that aren't in the same location, or may not even exist where Cent has them. Apache is handled differently as well, so if you have anything that uses .htaccess files, you may have to change them significantly. In summary: gently caress that guy. FlapYoJacks fucked around with this message at 02:55 on Mar 12, 2014 |
# ? Mar 12, 2014 02:53 |
|
Tell them to write up a formal proposal of why Ubuntu would benefit, along with a migration plan including resource allocation and associated costs, timelines, plans to limit impact (like a staggered migration strategy and rollback), and differences between the two platforms that may exist and will require extra man hours. If they think it's such a great idea then they should at least be able to put the work into planning and writing a proposal. Otherwise, if that's too much work, they probably aren't going to back up the actual work to change.
|
# ? Mar 12, 2014 04:14 |
|
ratbert90 posted:Feel free to hire me for you linux guy. This is a bad engineering mindset as well. You shouldn't leave CentOS alone for years with tons of unpatched security vulnerabilities because "it works".
|
# ? Mar 12, 2014 04:18 |
|
Hey, it worked for Sony!
|
# ? Mar 12, 2014 04:21 |
|
So, uh, I read the OP then skipped down to the last page. Questions/issues regarding desktop environments that work better on legacy hardware: I have an ATI Radeon HD 4650 in an old system. This causes no end of headaches, as apparently ATI stopped updating their proprietary linux driver ['fglrx'] for older cards. So I'm left with the open source alternative, 'radeon', which sucks balls for any shiny gaming and 3d stuff, or using the last known good proprietary driver, 'fglrx-legacy', which is unmaintained. Oh, and fglrx-legacy doesn't work with newer versions of X, which I understand to be the ONLY window manager in existence. This leaves me finding older distros such as Ubuntu 12.04.1 (but not 12.04.2, that's too new!) to get a sufficiently outdated version of X. 3d gaming SEEMS to work ok, but (naturally...) desktop performance becomes unusably slow. This is kinda where I throw my hands up and ask for help -slash- consider Windows. Suggestions for legacy hardware desktop environments?
|
# ? Mar 12, 2014 04:41 |
|
|
|
Suspicious Dish posted:This is a bad engineering mindset as well. You shouldn't leave CentOS alone for years with tons of unpatched security vulnerabilities because "it works". Did I say that? There's a difference between updating an OS and changing to a completely different distro just because you don't like Cent.
|
# ? Mar 12, 2014 05:22 |