|
The Third Man posted:I know this is from a couple days back, but is the RHCSA exam something you can prepare for by yourself with a book and a couple of computers? I'm quite enjoying learning the system in my free time, and I think being certified would be a valuable career move. Is there much demand for certified Red Hat admins without practical experience, though? I've done basic help desk stuff at a very large public university for a few years now, and now I'm focusing more on the A/V side of things, but it's just not terribly interesting. I am working my way through a RHCSA/RHCE study guide with just a pair of CentOS VMs. Unfortunately I haven't been able to get nested virtualization working (KVM inside of ESXi 5), but I did spend the better part of this weekend poking around trying to get it to work and learning a lot.
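On the nested-KVM problem: as I recall, ESXi 5.0 doesn't expose VT-x to guests unless you explicitly allow it on the host and in the guest's .vmx. The recipe below is the commonly circulated one from that era, written from memory, so treat the option names and paths as things to double-check rather than gospel:

```text
# On the ESXi 5.0 host, append to /etc/vmware/config:
vhv.allow = "TRUE"

# In the CentOS guest's .vmx file (with the VM powered off):
monitor.virtual_exec = "hardware"
monitor.virtual_mmu = "hardware"
```

After a reboot of the guest, /proc/cpuinfo inside CentOS should show the vmx flag, at which point KVM modules will load.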
|
# ? May 7, 2012 08:25 |
|
|
You know, I've tried to get into screen so many times, but I've never quite "gotten" it. I generally pop into a server to restart services, edit text files, or update from svn, so maybe it's not as useful to me since I don't do serious sysadmin work? Pretty much the only thing I find it useful for is to see what I was doing before, if I had to disconnect for some reason, but even then I can always just check ~/.bash_history. Am I missing something that makes it magical?
|
# ? May 7, 2012 19:09 |
|
SlightlyMadman posted:You know, I've tried to get into screen so many times, but I've never quite "gotten" it. I generally pop into a server to restart services, edit text files, or update from svn, so maybe it's not as useful to me since I don't do serious sysadmin work? Pretty much the only thing I find it useful for is to see what I was doing before, if I had to disconnect for some reason, but even then I can always just check ~/.bash_history. Am I missing something that makes it magical? If you have multiple admins or multiple connections, you can't always rely on .bash_history. Screen is great for saving your state if you get disconnected. Nothing like running some script that takes 2 hours to complete, but you get disconnected and you have to start over. Or leaving something running 24/7 (like an IRC client or torrent) and re-connecting to it at home, work, Starbucks, etc. without having to restart or re-launch anything. The other thing screen is good for is programming. I can start screen on the server and have an editor open, a regular console so I can delete files or whatever, tail a log file... all at the same time, and I can just switch between them without having to close/re-open files or programs. Not to mention you can do other stuff like split windows. It works better in many cases than opening up 20 SSH windows.
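For anyone following along, the detach/reattach workflow described above boils down to a handful of commands (the session name here is made up):

```shell
# Start a named session and kick off your long-running job inside it
screen -S longjob

# Detach: press Ctrl-a, then d. The job keeps running on the server.

# Later, from any SSH connection, list sessions and reattach
screen -ls
screen -r longjob

# If the session is still "attached" to a dead connection,
# force-detach it there and attach here in one step
screen -dr longjob
```

Ctrl-a is the default command prefix; window switching (Ctrl-a n / Ctrl-a p) and splits (Ctrl-a S) hang off the same key.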
|
# ? May 7, 2012 19:17 |
|
spankmeister posted:Screen can connect to serial ports? Huh, never knew. Yup. It's ridiculously easy if you're just using the default 9600,8,N,1: screen /dev/[YourDevice]. Then you can do all the useful screen stuff like disconnecting, and kill the session when you're done. If you use something besides 9600,8,N,1 then you probably have to do some additional config. I haven't really tried anything else, honestly.
|
# ? May 7, 2012 19:35 |
|
spankmeister posted:Screen can connect to serial ports? Huh, never knew. SlightlyMadman posted:I generally pop into a server to restart services, edit text files, or update from svn, so maybe it's not as useful to me since I don't do serious sysadmin work? The big win for screen is where you want to run a long-running job interactively (and thus, can't simply background and/or nohup it), where it's absolutely painful to restart the job in the event of a lost network connection. For me, it's usually running number-crunching scripts that can take many hours, if not days to run. Just start them in a screen session, then I can detach, go elsewhere, and plug in when I need to again.
|
# ? May 7, 2012 19:39 |
|
ExcessBLarg! posted:Serial concentrators usually make a bunch of serial ports available over some kind of network connection, so usually you're just ssh or telnetting into them. But yeah, you can run minicom inside a screen session too. Yeah I was aware of that but not that screen could talk to ttyS devices directly.
|
# ? May 7, 2012 19:41 |
|
Bob Morales posted:Screen is great for saving your state if you get disconnected. Nothing like running some script that takes 2 hours to complete, but you get disconnected and you have to start over. This is what I mostly use screen for (well, I actually prefer tmux but whatever, same thing). When you're compiling a hundred updates from ports it really sucks if you lose the connection halfway through. With screen it doesn't matter what happens, it just keeps on truckin' and you can reconnect at any time from any device without issue. I guess it's less of an issue now because compiling is so much faster, but back when compiling a single large package like gcc or samba took hours it really sucked to come in the next day and find that the ssh tunnel had died 10 minutes after you walked away from your desk.
|
# ? May 7, 2012 19:43 |
|
spankmeister posted:Yeah I was aware of that but not that screen could talk to ttyS devices directly. I mean, let me qualify that I'm doing this on my Mac, but I don't imagine that it would be any different than screen running on a Linux box.
|
# ? May 7, 2012 21:34 |
|
Okay, this may not be what normally goes in this thread, as it seems that you fellows are Linux gurus and I have little knowledge of Linux, but here goes... Say I have been stuck with Windows pretty much forever and am looking to switch over to Linux; how would I best go about doing this? I have dabbled with DamnSmallLinux, and found that while useful, it lacks a lot of desktop "creature comforts". I also tried to install Ubuntu using Wubi alongside my Windows installation, which I could not get to work properly. So I uninstalled the Wubi client thing and dual-booted Ubuntu instead; I used the automatic partition resizing and formatting tool, gave it 100GB of unallocated space to format and create all the required partitions for a Linux install, and it caused my Windows install to go down in flames. My question is: should I just give up on Windows totally, blank my machine, and load Linux from the ground up, not knowing a lot about how Linux works? Any help would be appreciated.
|
# ? May 8, 2012 00:10 |
sh1gman posted:Okay, this may not be what normally goes in this thread, as it seems that you fellows are Linux gurus, and I have little knowledge of Linux, but here goes... Rather than messing around with partitioning and dual booting and all that jazz, I would just leave Windows installed as your "host OS", and spin up a virtual machine using VirtualBox. Then you can easily try out different distributions and play around with them without risk of screwing up your machine.
|
|
# ? May 8, 2012 00:22 |
|
fletcher posted:Rather than messing around with partitioning and dual booting and all that jazz, I would just leave Windows installed as your "host OS", and spin up a virtual machine using VirtualBox. Then you can easily try out different distributions and play around with them without risk of screwing up your machine. EDIT: After downloading and installing VirtualBox and using an old 10.04 LTS disc that I had, the VM boots fine and is fairly responsive. Thank you all for the help. orange juche fucked around with this message at 01:12 on May 8, 2012 |
# ? May 8, 2012 00:38 |
|
sh1gman posted:Hmmm... I may wind up doing this. Question: I am going to be doing this on an older Core2Duo machine. Does VirtualBox have a lot of overhead, or do the VMs run at a decent speed? I know it won't be native speed, but am I going to be faced with a massive difference between actual machine power and the VM?
|
# ? May 8, 2012 01:13 |
sh1gman posted:Hmmm... I may wind up doing this. Yep, as you have seen, the performance is quite good. As Keito mentioned, enable VT-x support if your processor has it (it was a BIOS option on my laptop). Also, in the VirtualBox settings, go to Storage -> SATA Controller and check "Use host I/O cache". That helped boost performance a bit for me (also enable the SSD option if you have an SSD). I can't really tell the difference in performance between my host & guest OS for 99% of the things I do with it.
|
|
# ? May 8, 2012 01:37 |
|
Martytoof posted:I mean let me qualify that I'm doing this on my Mac, but I don't imagine that it would be any different than screen running on a Linux box
|
# ? May 8, 2012 02:37 |
|
I prefer cu on BSD to minicom but I don't know if there's a Linux version. I generally have 4-5 screens running on a bunch of different machines so I can keep things to one task per screen session. That way the history for each screen is dedicated to whatever I'm doing. Usually 1-2 windows for writing code and a third for man pages, too. Anyone have a good setup for vim that does something like Visual Studio's function parameter completion? I started using autoproto but it's a little janky and only works for C.
|
# ? May 8, 2012 03:12 |
|
For serial consoles, I usually skip screen and minicom and use cu.
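The code block in that post got eaten by the forum archive, but a typical cu invocation looks like this (device and speed below are just examples):

```shell
# Connect to a serial console at 9600 baud
cu -l /dev/ttyS0 -s 9600

# To hang up and exit, type ~. at the start of a line
```

On Linux, cu ships as part of the uucp package on Debian-family distros.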
|
# ? May 8, 2012 03:13 |
|
I'm using the latest version of Linux Mint Debian Edition on my Thinkpad (Intel 3945 b/g card, I believe) and not only is the wireless extremely slow to connect, but it disconnects several times per hour. I don't believe it's a signal issue, as it's constantly at 100% and I have a laptop next to it (running Windows) that is 100% stable. Oh, and it worked in Ubuntu 12.04 flawlessly. Any idea where to start with this?
|
# ? May 8, 2012 19:25 |
|
BoyBlunder posted:I'm using the latest version of Linux Mint Debian Edition on my Thinkpad (Intel 3945 b/g card, I believe) and not only is the wireless extremely slow to connect, but it disconnects several times per hour. Do the log files (in /var/log) show anything related to the wifi card when it happens?
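Concretely, on a Debian-based system like LMDE you'd look for iwl3945 driver messages around the time of a disconnect (log file names vary a bit by distro):

```shell
# Kernel/driver messages from the Intel 3945 driver
dmesg | grep -i iwl3945 | tail -20

# NetworkManager and supplicant chatter around the disconnects
grep -iE 'iwl|wlan0|NetworkManager' /var/log/syslog | tail -40
```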
|
# ? May 8, 2012 19:32 |
|
I'm having some issues with LDAP. I'm using LDAP as the identity provider with Kerberos for authentication. Under Ubuntu, I have it configured and working: I can query users (LDAP) and authenticate as users (Kerberos). Under CentOS 6.2, the getent passwd command isn't working as expected; under Ubuntu that command lists all local & all LDAP users, but under CentOS it only shows the local ones. I can log in just fine using LDAP/Kerberos, it just seems like the "getent passwd" command isn't working. What should I check? Trying to Google for "getent passwd" not working gives me TONS of pages where LDAP in general isn't working. I can query LDAP just fine. EDIT: loving hell. *Every* time I spend hours on something, then post here asking for help, I always seem to find the answer IMMEDIATELY after. I had "enumeration = true" instead of "enumerate = true" in my sssd.conf file (what tells it to list all users). Xenomorph fucked around with this message at 20:03 on May 8, 2012 |
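For anyone who hits the same thing: the option lives in the per-domain section of /etc/sssd/sssd.conf, and enumeration is off by default in sssd. The domain name below is made up:

```ini
[domain/example.com]
# Typo'd as "enumeration" above -- the real option is "enumerate".
# With this off (the default), "getent passwd" only lists local users;
# single lookups like "getent passwd someuser" still work fine.
enumerate = true
```

Restart sssd after changing it for the new setting to take effect.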
# ? May 8, 2012 19:55 |
|
I'm using Debian testing and, lately, switching from X back to a text console will occasionally cause a system freeze. I've tried rolling back:
linux-image-3.2.0-2-686-pae_3.2.9-1 -> linux-image-3.2.0-1-686-pae_3.2.6-1
xserver-xorg_1:7.6+12 -> xserver-xorg_1:7.5+8+squeeze
xserver-xorg-video-radeon_1:6.14.4-2 -> xserver-xorg-video-radeon_1:6.13.1-2+squeeze1
and associated dependencies, and rebooting and restarting X, without any luck. Does anyone know where to even start with diagnosing this problem? Help!
|
# ? May 8, 2012 22:07 |
|
I have another question, on RHEL/CentOS, viewing a user's info ("id username") results in context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 as the output if SELinux is enabled. When I disable SELinux, the output is normal (it lists just their UID and GID). Anyone know why I see "context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023" or how to fix it?
|
# ? May 8, 2012 23:46 |
|
Xenomorph posted:I have another question, on RHEL/CentOS, viewing a user's info ("id username") results in context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 as the output if SELinux is enabled. Is your user named -Z?
|
# ? May 8, 2012 23:57 |
|
SELinux feels like some kind of magic to me. Is there a good 101-style primer? Obviously I can just google myself, but I don't know if there's a recommended go-to for this sort of thing. I think it's the one aspect of Linux that I can say I have literally no understanding of.
|
# ? May 9, 2012 00:00 |
|
It's more trouble than it's worth and I usually just disable it completely. I think the added value of SELinux is very low unless you are literally working for the NSA or some poo poo, but those guys came up with the whole deal, so. (This will probably get me some flak...)
|
# ? May 9, 2012 00:27 |
|
I saw on the last page someone had trouble with Ubuntu destroying his partitions when he did a cd/usb install? Is this a common occurrence? I have 100 gigs of free space in between my c: and d: partitions for Windows and my intentions were to put linux there, but I'm not going to bother if it will hose my Windows and data partitions. I'm not really interested in using Wubi.
|
# ? May 9, 2012 01:03 |
|
membranoid posted:I saw on the last page someone had trouble with Ubuntu destroying his partitions when he did a cd/usb install? Is this a common occurrence? You're not going to lose your partitions unless you explicitly tell the installer to nuke them. Ubuntu does, however, by default overwrite the MBR with its own bootloader, which is probably what led that person to believe the Windows install was gone.
|
# ? May 9, 2012 01:13 |
|
So is there an option in the install that allows dual-boot, or will I have to mess around with GRUB to get Windows available again? I haven't messed with Linux in years other than various live CDs. \/\/ Thanks! membranoid fucked around with this message at 01:21 on May 9, 2012 |
# ? May 9, 2012 01:16 |
|
membranoid posted:So is there an option in the install that allows dualboot or will i have to mess around with grub to get windows available again? If you've got windows installed it should be detected and added to the boot list automatically.
|
# ? May 9, 2012 01:20 |
|
Misogynist posted:Something's weird with your configuration. You're viewing the output of id -Z, but that invocation won't allow you to specify another user's username. I can log in as any network user, type just "id" (nothing else), and I get "context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023". This is a clean install of CentOS 6.2 where I've _only_ added my LDAP/Kerberos information. SELinux was causing weird stuff under CentOS 6.0 for me as well. After login, I'd see something like an error telling me the user's home folder didn't exist, when it did. That was fixed by changing SELinux from "enforcing" to "permissive". The "context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023" thing is fixed by changing SELinux to simply "disabled".
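For reference, here's how to check and flip between the three modes being described (enforcing/permissive can be toggled live; "disabled" requires editing the config and rebooting):

```shell
# Show the current mode: Enforcing, Permissive, or Disabled
getenforce

# Switch to permissive until the next boot (root)
setenforce 0

# For a permanent change, set SELINUX=permissive (or disabled) in this
# file, then reboot; "disabled" only takes effect after a reboot
grep '^SELINUX=' /etc/selinux/config
```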
|
# ? May 9, 2012 03:12 |
|
I think id just shows you the current SELinux context of that user and is intended behavior.
|
# ? May 9, 2012 08:17 |
|
I have been running out of memory recently on my work computer running Debian. The main culprit is Chrome because each tab can gobble up a ton of memory, but even after I quit Chrome, I'm still using about 2GB of RAM. When I look at the list of processes, the memory used by all processes doesn't come anywhere close to 2GB. Is this a sign of a memory leak or something?
|
# ? May 9, 2012 13:36 |
|
Modern Pragmatist posted:I have been running out of memory recently on my work computer running Debian. The main culprit is Chrome because each tab can gobble up a ton of memory, but even after I quit Chrome, I'm using about 2GB or RAM. When I look at the list of processes, the memory used by all processes doesn't come anywhere close to 2GB. Is this a sign of a memory leak or something? If you run cat /proc/meminfo what does it give you for 'buffers' and 'cached'? That's probably where your 'missing' memory is.
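A minimal version of that check; the amount applications can actually grab is roughly MemFree + Buffers + Cached:

```shell
# The fields that account for "missing" RAM
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo

# free(1) shows the same numbers; on procps of this era the
# "-/+ buffers/cache" row is the one that matters for applications
free -m
```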
|
# ? May 9, 2012 14:01 |
|
Bob Morales posted:If you run cat /proc/meminfo what does it give you for 'buffers' and 'cached'? That's probably where your 'missing' memory is. You can also see this in realtime with top.
|
# ? May 9, 2012 14:16 |
|
Bob Morales posted:If you run cat /proc/meminfo what does it give you for 'buffers' and 'cached'? That's probably where your 'missing' memory is. Good call: Buffers: 582864 kB Cached: 1153076 kB Cleared the cached RAM and it dropped down as expected. So cached memory shouldn't limit my overall memory availability, right?
|
# ? May 9, 2012 15:36 |
|
Modern Pragmatist posted:Good call: The theory is the kernel should release it if an app wants it. It's just caching everything it reads because in theory it could use it again, and it'd be faster the second time around.
|
# ? May 9, 2012 16:22 |
|
Modern Pragmatist posted:Good call: Right. "Cached memory" is actually memory used as cache; data read from storage is cached in memory for fast access later if it's needed again, and data written to disk is buffered in memory and then written out more slowly in the background, resulting in (in some cases dramatic) performance improvements for anything doing a lot of disk IO. There's no reason to clear it by hand; if a program needs more memory, the OS will discard cache blocks as needed to make room.
|
# ? May 9, 2012 17:19 |
|
http://www.linuxatemyram.com/ On a side note, if it's just killing you that you have a 100MB pagefile but you have 6 GB of free RAM, you can flush the swap back into RAM.
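The code block here got eaten by the forum archive; the standard trick it's almost certainly showing is cycling swap off and back on, which forces everything in the pagefile back into RAM (needs root, and enough free RAM to hold it all):

```shell
sudo swapoff -a && sudo swapon -a

# Verify: swap "used" should be back to 0
free -m | grep -i swap
```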
|
# ? May 9, 2012 18:59 |
|
Thanks to both of you for your help.
|
# ? May 9, 2012 19:01 |
|
spankmeister posted:I think id just shows you the current SELinux context of that user and is intended behavior. So you're saying that displaying "context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023" behind their UID/GID information is the intended (correct) behavior?
|
# ? May 9, 2012 19:54 |
|
|
That's how my CentOS system displays it: [myuser@myhost ~]$ id uid=500(myuser) gid=500(myuser) groups=500(myuser) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 This was a user created by the installer, so if it's NOT the intended behaviour then CentOS sure broke something out of the box. some kinda jackal fucked around with this message at 20:02 on May 9, 2012 |
# ? May 9, 2012 19:59 |