|
It might be printing to stderr instead of stdout - try with &> instead of just > . (I'm assuming this is bash or bash-like.)
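A quick way to see the difference, with a made-up `emit` function standing in for the real program:

```shell
# emit is a stand-in for whatever tool you're actually running
emit() { echo "data"; echo "oops" >&2; }

emit > out.txt    # captures stdout only; "oops" still reaches the terminal
emit &> both.txt  # bash shorthand for '> both.txt 2>&1'; captures both streams
```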
|
# ¿ Jul 6, 2011 17:14 |
|
Basically, you have one input channel (stdin), but two output channels (stdout and stderr). The idea is that programs can output the actual data on stdout, while status info, warnings, errors and the like go to stderr. Then, you can do "tool > file" or "tool | othertool" and the data goes in the file or into the next program, but the other stuff is printed on the console. Sometimes, programs have weird ideas about what should go where, though.
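A sketch of that split, with an invented `tool` function rather than a real program:

```shell
# data rows on stdout, a status message on stderr
tool() { printf 'row1\nrow2\n'; echo "working..." >&2; }

# only stdout enters the pipe, so wc -l counts just the two data rows
lines=$(tool 2>/dev/null | wc -l)
```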
|
# ¿ Jul 6, 2011 18:06 |
|
Bob Morales posted:You could re-direct stderr to a log file or something. Basically, they're both just file handles a program can write to. When the shell starts a process, that process starts with stdout, stderr and stdin set by the shell. If you just run "tool", nothing special, then stdin is the keyboard, and stdout+stderr are both connected to the console output. The different things you can do in the shell can modify this: Using > will connect stdout to the given file, 2> connects stderr, and &> connects both. If you do "tool1 | tool2", then stdout (but not stderr) from tool1 is connected to stdin in tool2, and I believe |& will connect both. Put simply, they aren't inherently different, it's just that they're typically used for different things - and the shell will join them together unless you tell it to separate them somehow. edit: An example might be useful. I've got a python script lying around that will read two large text files, and extract some information from the combination. It writes the results to stdout, and progress info to stderr. In practice, that means I can run "mytool file1 file2 > results", getting both a clean data file and progress/debug information on the console while it slowly churns its way through. I could have swapped stdout and stderr in the program and done "mytool file1 file2 2> results" with the same result, but that's just being different for the sake of it. Computer viking fucked around with this message at 20:30 on Jul 6, 2011 |
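The operators above, demonstrated with a throwaway function (bash syntax assumed):

```shell
noisy() { echo "data"; echo "warn" >&2; }

noisy > o.log 2> e.log   # split: data to o.log, warn to e.log
noisy > all.log 2>&1     # merge: stderr follows stdout into all.log
noisy 2>&1 | wc -l       # both lines enter the pipe; bash 4+ also spells this 'noisy |& wc -l'
```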
# ¿ Jul 6, 2011 20:22 |
|
Bob Morales posted:No, all I can do is ssh in as a regular user. If you can neither su, sudo, nor get at a remote console, I believe your watercraft has run low on paddles...
|
# ¿ Jul 9, 2011 17:23 |
|
Bob Morales posted:What are you guys running that aren't using Fedora/Ubuntu? One of my laptops runs debian testing, and my workstation (at work, ofc) runs FreeBSD.
|
# ¿ Jul 11, 2011 14:27 |
|
angrytech posted:I'm curious here, what sort of problems is it possible to have with apt? It's built like a god damned tank. Well, I've had ubuntu upgrades that left me without working wireless, and at one point unable to log into KDE4. (And ubuntu dist-upgrades are a bit hit-or-miss, even using their custom tool for it.) Apt itself is usually fine, though.
|
# ¿ Jul 11, 2011 20:17 |
|
Bob Morales posted:They other things is the fact that it's BSD and not Linux, so a lot of system utilities are very different and won't transfer over. Eh, it's not that bad. The OS X userland resembles a reasonably current FreeBSD, and that's not a very difficult transition.
|
# ¿ Jul 12, 2011 16:24 |
|
Mh, true. I kind of miss a few FreeBSD monitoring and info tools on linux machines, and I assume the same happens the other way. However, a pure C project shouldn't miss those overly much ... though the config/make framework certainly might.
|
# ¿ Jul 13, 2011 01:14 |
|
ShoulderDaemon posted:Blocking all incoming connections is pretty standard policy for a lot of networks, especially for places like university dorms. Unless you have some reason to believe otherwise, this is by far the most likely answer. Mine had an unusual solution there: They allowed incoming on a bunch of normal ports, e.g. 22 TCP. However, they blocked all UDP traffic, no exceptions. I'm not entirely sure how they concluded that that was a good solution.
|
# ¿ Sep 12, 2011 12:03 |
|
serling posted:I'm running a Linux environment and have currently introduced a new Ubuntu 10.04 box into the system. It's connected to a NIS database on a RHEL server, and authenticates its users through Kerberos (Windows side). Both these services are up and running and work with the other machines in our server park. It sounds familiar to me in that I have roughly the same issue: Samba, using AD for authentication, and some random selection of users are rejected for no obvious reasons. I haven't found any relevant log entries so far, but I also haven't looked too hard at it yet. And of course, the only users with issues are actual other people, my admin and test accounts both work perfectly. Argh.
|
# ¿ Jan 22, 2012 17:12 |
|
i barely GNU her! posted:I saw that on a Solaris box I used to administer, it turned out that the winbind database was being corrupted on save, but that bug was IIRC only specific to big-endian 64-bit systems (e.g. SPARC) and was fixed in 3.4.4. More to the point, actually enabling winbind in pam.d/samba helps. It worked with some users because I had already tested those over ssh (which was set up for winbind), I believe. I feel somewhat stupid now.
|
# ¿ Jan 23, 2012 21:13 |
|
Gorilla Salsa posted:The only apps I really need would be a calculator, maybe a notepad, a very simple MS-Paint style image editor, and Google Chrome. I understand the second point. I suppose I just worry that by starting off with all those programs and removing them, there will be residual leftovers (dependencies?) after their removal, so I'd be better off starting from scratch. Should I try slackware? I'm bad at Linux, but not inept. I guess we should mention that the officially supported way to develop android apps seems to be with Eclipse and some plugins, so you might have to add that to your software list.
|
# ¿ Feb 2, 2012 01:31 |
|
waffle iron posted:Is that easier to use than TestDisk? I've used it a couple times and it a bit tough to use without having the documentation up. It's not too bad. First, "kpartx -a -v imagefile" (-a adds the mappings, -v prints their names). The output is a list of device names that live somewhere in /dev ; I think mine were in /dev/mapper/ . The devices correspond to the partitions, so you should be able to fsck and mount one of them. If that doesn't work, you can first check what partitions it sees with -l . If it doesn't find any, you will have to re-create one: I think gpart will work on image files.
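Roughly this sequence, as a sketch - the image name and the printed mapping line are made up, and everything here needs root:

```
$ kpartx -l disk.img        # list the partitions kpartx can see
$ kpartx -a -v disk.img     # add the mappings; -v prints their names
add map loop0p1 (253:0): 0 204800 2048
$ fsck /dev/mapper/loop0p1  # now fsck/mount the mapping like a partition
$ mount /dev/mapper/loop0p1 /mnt
$ kpartx -d disk.img        # tear the mappings down when done
```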
|
# ¿ Feb 3, 2012 12:09 |
|
diremonk posted:This is kind of a odd/dumb question, but is running ubuntu off a flash drive possible/feasible? Every page I've been to only deals with running a live version off the drive instead of a full install which is what I would rather have. Uhm - if your reasoning is that you have a lot of free space, why would you then want to run off a USB stick instead of a partition on the HD? Anyway, I believe you can boot off an installer CD (or another USB stick), then choose the USB stick as your install target.
|
# ¿ Feb 6, 2012 16:22 |
|
diremonk posted:The live version seemed to work fine, but I'm not sure I like the idea of running a system 24x7 that I can't patch or update without creating a brand new install. Or am I wrong about that too? I've dist-upgraded live images, and it can go both ways. It probably depends on the filesystem layout of the particular live image: It will need space in places live images usually don't, so if it's set up with /home being the only place with free space (or an rw mount), you'll have issues. Installing to the stick normally seems like a reasonable choice for continued usage.
|
# ¿ Feb 6, 2012 17:42 |
|
ruttivendolo posted:Hoping this is the right thread: That's one of those things that are worth a thread of its own (and I suspect I might have overlooked one). Anyway. Ubuntu is a common choice. Most things just work, and there's a heap of documentation and forums especially for it. On the downside, the default UI (Unity) is a bit odd and has gotten a very divided reaction. It's not much work to install another UI, and there are other releases that default to a different one (e.g. Kubuntu, which defaults to KDE).
|
# ¿ Feb 16, 2012 12:57 |
|
Bob Morales posted:There once was? There was a period when it was arguably the easiest way to run bleeding-edge versions, if you wanted to try whatever new and exciting things were around. (These days I guess you'd just use a bunch of PPAs and hope for the best.)
|
# ¿ Feb 18, 2012 12:35 |
|
fletcher posted:I'm looking for an SSH connection manager for ubuntu 10.04 - any recommendations? I've been using Putty since I have been using that on Windows. Does it have to be graphical? Otherwise, you can do a lot with the config file. The manual is available with "man ssh_config", or I found a guide that could be useful here.
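For the non-graphical route, a sketch of what ~/.ssh/config can do - the host and user names here are invented:

```
# ~/.ssh/config
Host web
    HostName web.example.com
    User alice
    Port 2222
    Compression yes
```

After that, a plain "ssh web" connects with all of those settings applied.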
|
# ¿ Feb 22, 2012 21:08 |
|
Zom Aur posted:I think there's a putty version for linux, but I've never used it myself. Hah, I had completely forgotten that - I've even got it installed here. Seems to work very much like in windows.
|
# ¿ Feb 22, 2012 21:40 |
|
That sounds like a good idea to me. SuSE is about as different as you can get within modern linuxes with custom graphical tools, so yeah, it sounds like a good start. (Besides, I kind of like OpenSuSE. They tend to have good KDE defaults.) The redhat/fedora family would make a lot of sense, too. If you want something more exotic later, you could always play with FreeBSD - most of the programs are the same, but a lot of administrative things are not. It's an interesting system for getting a wider perspective (and an appreciation for automated tools). I'm very fond of it, but I wouldn't recommend it to anyone who hasn't chosen it on their own.
|
# ¿ Feb 22, 2012 22:28 |
|
Social Animal posted:Wow so it's for real? I'll admit I'm a little jealous.. please do tell how it is when it arrives. Has that really been in doubt the last half year or so? And yes, I look forward to the reviews. Might grab one myself when the supply stabilizes.
|
# ¿ Mar 1, 2012 12:18 |
|
Bob Morales posted:Doesn't that really mean, "If I am out of memory, any and all programs that ask for more will crash and burn?" Edit: which isn't to say you're not also risking random programs failing with ENOMEM.
|
# ¿ Mar 1, 2012 20:05 |
|
Bob Morales posted:What prevented Objective-C and OpenStep from taking over Linux GUI software development? Was OpenStep not free and GNUstep was never finished/stable? It succeeded on OS X (but then again it's the only option) At a guess, it was seen as an esoteric language while C++ was more ... established?
|
# ¿ Mar 6, 2012 21:48 |
|
Mr Darcy posted:I got a cheap rear end netbook recently, and as it had a bloated windows install on it I thought I'd be cunning and put on Ubuntu 11.10 a) If at all possible, use the package system instead (try apt-cache search someGame to see if it exists, then sudo apt-get install someGame if it does). b) If you do have to compile it from source: Open a terminal and cd to the right folder. Run ./configure, which will probably complain about missing libraries. Find and install those (ref. the apt-cache/apt-get method above); if there's a variant ending in -dev, you want that one. Try again. Repeat until it's happy, then run make. If that works, you'll get an executable in the same directory, typically named after the game. Try running that with ./whateverName .
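The whole dance, as a hypothetical session - the package and tarball names are made up for illustration:

```
$ apt-cache search somegame             # does a package already exist?
$ sudo apt-get install somegame         # if so: done, no compiling needed
$ tar xzf somegame-1.0.tar.gz && cd somegame-1.0
$ ./configure                           # complains about, say, a missing SDL
$ sudo apt-get install libsdl1.2-dev    # the -dev variant has the headers
$ ./configure && make                   # repeat until configure is happy
$ ./somegame
```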
|
# ¿ Mar 28, 2012 12:48 |
|
Keito posted:Come on people, don't scare the poor guy away with tales of terminals and compilation from the get-go. Just use the Add/Remove interface in the Applications menu. See this page from Ubuntu's documentation for more information. Oh yes, you're right - I read that as "nethack-alikes" and thought he had gotten tarballs of some obscure variants. I'd like to say that my plan a) was at least reasonable, but using the GUI tools is better.
|
# ¿ Mar 28, 2012 13:34 |
|
etcetera08 posted:Anyone have suggestions for a good Quake-style drop-down terminal? The built in terminal in Ubuntu is okay for light use but I miss hotkey drop-down like I can get with iTerm in OS X. (Oh god how I miss iTerm...)
|
# ¿ Apr 4, 2012 22:22 |
|
Kaluza-Klein posted:This is an odd question, but how can I be 100% sure that the system is finished writing to external storage before I unmount it/remove it? In theory, you should be able to call sync, and when it finishes all buffers should be written to permanent storage. Practically speaking, it's more of a hint than an order, so... it might or might not actually work in your case. Edit: As the manpage says, sync schedules the writes, but can return before they finish. Given that the issue in your case seems to be huge buffers somewhere in the middle that make writes seem to finish long before anything is permanently done, well... try, but I'm pessimistic. Computer viking fucked around with this message at 15:28 on Jun 1, 2012 |
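The basic idea, at least - whether the data truly reaches the medium is up to the kernel and the hardware:

```shell
echo "important" > demo.txt
sync   # ask the kernel to flush dirty buffers; as noted above, it
       # may return before the data is physically on the medium
```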
# ¿ Jun 1, 2012 15:26 |
|
Social Animal posted:Just installed xterm on my server for kicks. I will say it is pretty surreal to run Firefox on my server through SSH. Does anyone use xterm for any serious purpose? It looks like a very ancient and outdated thing, not to mention slow. Xterm is sort of the terminal of last resort - it's light, fast, and doesn't depend on anything noteworthy. Besides, it's actually updated fairly often, so it works quite well for most terminal things even if the interface is clunky. I'm not sure what it has to do with X-forwarding, though?
|
# ¿ Jun 8, 2012 14:26 |
|
evol262 posted:If you worry about this, all you really need is xauth and xhost. And practically speaking, all you need to do is ssh -X. (or -Y, depending on settings. And you might want -C for compression, if the link has limited bandwidth.)
|
# ¿ Jun 10, 2012 10:48 |
|
If all your time fields are the same length, you can also use uniq -w. If I remember correctly, GNU uniq keeps the first line of each run of duplicates, but do check on your system.
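A small sketch with invented timestamps, using GNU uniq's -w:

```shell
printf '12:00 first\n12:00 second\n12:01 third\n' > events.txt
# compare only the first 5 characters (the time field); GNU uniq
# keeps the first line of each run of duplicates
uniq -w 5 events.txt > deduped.txt
```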
Computer viking fucked around with this message at 16:13 on Jun 27, 2012 |
# ¿ Jun 27, 2012 16:11 |
|
ToxicFrog posted:Per the man page: "matching lines are merged to the first occurrence." That's not in the manpages on the ancient RHEL machine I checked on. But yeah, rev and sort and/or uniq sounds fine to me, and is fairly easy to understand. Maybe a bit suboptimal on huge files (how efficient is rev?), but eh.
|
# ¿ Jun 28, 2012 17:54 |
|
evol262 posted:Thinkpad yoga. Dell XPS You can even buy an XPS preloaded with Ubuntu, I think. Even if you plan to reinstall it with something else that's a good sign of sorts.
|
# ¿ Mar 3, 2016 12:20 |
|
Deduplication and compression are the other things that can give similar results - du returning on-disk sizes keeps tripping me up on the ZFS file servers at work. It's admittedly right there in the name of the utility, but still.
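Sparse files show the same du quirk and are easy to demo without a ZFS box:

```shell
truncate -s 1M sparse.bin           # 1 MiB logical size, no blocks allocated
du -k sparse.bin                    # on-disk size in KiB: far less than 1024
du -k --apparent-size sparse.bin    # logical size: 1024
```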
|
# ¿ Jan 15, 2019 02:24 |
|
Alternatively, you could try a different clocksource (which is what generates the timer interrupts). I think the RHEL 7 guide here should work - just try all the available ones and see if it changes anything.
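The sysfs interface for that, roughly - the listed sources are just examples of what you might see, and switching needs root:

```
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
tsc hpet acpi_pm
$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc
$ echo hpet | sudo tee /sys/devices/system/clocksource/clocksource0/current_clocksource
```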
|
# ¿ Jan 15, 2019 17:05 |
|
Just for clarity, the difference between a softlink and a hardlink is that a hardlink isn't really a link at all: it creates another equally real name for the same underlying file. A softlink, on the other hand, is more like a Windows shortcut: it's basically a magic file that contains a path. This leads to some differences. You can create a softlink to anywhere, including files on other file systems, or nonexistent ones, or even entirely invalid paths. Hardlinks, by contrast, only work within one file system, and you can only make one if the target exists and is a file (as opposed to, say, a directory). If you delete a softlink, you just delete the link. If you delete the file it points to, you now have a link that points to nothing, much like a broken Windows shortcut. Deleting a hardlink or the original is equivalent, though: both delete one of the names of the file. (There's no way to distinguish the "original" name.) Every file has a "link count" of how many names it has, and it lives on as long as the count is above zero. (Holding a file open in a program also keeps it alive, so if you delete the last name of an open file, it lives on anonymously until it's closed.) Also, see the funkiness mentioned above about what happens when you replace hardlinked files. As for "how", ln basically works like cp (the arguments are in the same order, something I've never been happy about how the manpage explains). Use ln -s if you want to make a softlink; the default is hardlinks. Computer viking fucked around with this message at 00:40 on Mar 1, 2019 |
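A quick demo of both behaviors (the file names are invented):

```shell
echo "hello" > original.txt
ln original.txt hard.txt      # hardlink: a second, equally real name
ln -s original.txt soft.txt   # softlink: a tiny file holding a path
rm original.txt               # removes one name; the data keeps its other name
cat hard.txt                  # still prints hello; soft.txt now dangles
```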
# ¿ Mar 1, 2019 00:31 |
|
You can also fire up sshd and use putty - it's not ideal, but definitely a better terminal emulator than powershell.
|
# ¿ Mar 19, 2019 23:32 |
|
For DISPLAY, it helps to remember that X is a server, not entirely unlike sshd: programs connect and talk to it to show graphics and receive input - and you can have multiple servers on one machine, or even connect over a network. The number is an abstract "which server", :0 is "the first one". If you connect over TCP, it determines the port number (:0 and :1 map to 6000 and 6001), while if you use a Unix socket it's the difference between /tmp/.X11-unix/X0 and /tmp/.X11-unix/X1. (The connection method is determined by what you set DISPLAY to: A bare :0 uses a socket, while 1.2.3.4:0 would use TCP.) As mentioned, it will be set correctly by something in your graphical login process. I'm not sure exactly whose responsibility it is to set DISPLAY - the login manager, perhaps? Either way, everything you start inside X will inherit an environment where it's set correctly. Cron does not; it's started outside and perhaps even before X is up.
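So a cron job that needs X has to set it by hand - a sketch, assuming the usual single local server at :0 and using xmessage as a stand-in client:

```
# crontab entry; xmessage is just an example X program
* * * * * env DISPLAY=:0 xmessage "hello from cron"
```

Depending on the setup you may also need to point XAUTHORITY at the session's cookie file before the server will accept the connection.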
|
# ¿ Apr 7, 2019 23:20 |
|
Hollow Talk posted:Everything newer than the Core 2 supports x86-64, as do some older things. This should be fine. All Core 2 as well, I believe; the phase-in was at the end of the Pentium 4. Those late socket 775 Pentium4 machines are sort of neat - they're dog slow singlecore+HT things, but they are just on the edge of modern: 64-bit, PCIe, SATA, DDR2 or 3. I've got one at work running Win7 64-bit as an experiment, with 4GB RAM and an SSD it's borderline usable. (Dropping in a Core2 Quad would probably do wonders, though). On the other end, the low end/early boards in the range are the last ones that will run Windows 98 with official drivers and feel somewhat period appropriate.
|
# ¿ Apr 10, 2019 18:34 |
|
waffle iron posted:Am I the only weirdo that uses 'most' as a pager? There's at least two of us.
|
# ¿ Apr 13, 2019 18:03 |
|
waffle iron posted:Anti-aliasing and sub-pixel font rendering was useful for LCD screens with a low PPI/DPI. These days it's mostly not needed. Unless you're using a desktop PC with a not-tiny, non-4k screen, which is still an incredibly common scenario. Non-HiDPI laptops (e.g. 1920x1080 15") are also far from rare.
|
# ¿ Apr 15, 2019 09:51 |