Computer viking
May 30, 2011
Now with less breakage.

It might be printing to stderr instead of stdout - try &> instead of just >.
(I'm assuming this is bash or bash-like.)
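
E.g. (sometool being a stand-in name):

  sometool > out.txt     # stdout only - errors still land on the console
  sometool &> out.txt    # both stdout and stderr go into the file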

Computer viking
May 30, 2011
Now with less breakage.

Basically, you have one input channel (stdin), but two output channels (stdout and stderr). The idea is that programs can output the actual data on stdout, while status info, warnings, errors and the like go to stderr. Then, you can do "tool > file" or "tool | othertool" and the data goes in the file or into the next program, but the other stuff is printed on the console.

Sometimes, programs have weird ideas about what should go where, though.

Computer viking
May 30, 2011
Now with less breakage.

Bob Morales posted:

You could re-direct stderr to a log file or something.
Or the opposite (output to file, error to console) - either way.

Basically, they're both just file handles a program can write to. When the shell starts a process, that process starts with stdout, stderr and stdin set by the shell. If you just run "tool", nothing special, then stdin is the keyboard, and stdout+stderr are both connected to the console output. The different things you can do in the shell can modify this: Using > will connect stdout to the given file. 2> connects stderr. &> connects both. If you do "tool1 | tool2", then stdout (but not stderr) from tool1 is connected to stdin in tool2, and I believe |& will connect both (in newer bash, at least).

Put more simply: they aren't inherently different, it's just that they're typically used for different things - and the shell will join them together unless you tell it to separate them somehow.
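
To make that concrete, with made-up tool names:

  tool > data.txt         # stdout to file, stderr to the console
  tool 2> errors.log      # stderr to file, stdout to the console
  tool &> everything.log  # both into the same file
  tool | othertool        # stdout into othertool's stdin
  tool |& othertool       # both into othertool's stdin (newer bash)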


edit: An example might be useful. I've got a python script lying around that will read two large text files, and extract some information from the combination. It writes the results to stdout, and progress info to stderr. In practice, that means I can run "mytool file1 file2 > results", getting both a clean data file and progress/debug information on the console while it slowly churns its way through. I could have swapped stdout and stderr in the program and done "mytool file1 file2 2> results" with the same result, but that's just being different for the sake of it.

Computer viking fucked around with this message at 20:30 on Jul 6, 2011

Computer viking
May 30, 2011
Now with less breakage.

Bob Morales posted:

No, all I can do is ssh in as a regular user.

If you can neither su, sudo, nor get a remote console, I believe your watercraft has run low on paddles...

Computer viking
May 30, 2011
Now with less breakage.

Bob Morales posted:

What are you guys running that aren't using Fedora/Ubuntu?

Anyone out there using Bodhi? Is Arch as popular as it seems?

One of my laptops runs Debian testing, and my workstation (at work, ofc) runs FreeBSD.

Computer viking
May 30, 2011
Now with less breakage.

angrytech posted:

I'm curious here, what sort of problems is it possible to have with apt? It's built like a god damned tank.
Granted, if someone starts adding PPAs like a drunken orangutan then yeah, they'll have issues.

Well, I've had Ubuntu upgrades that left me without working wireless, and at one point unable to log into KDE4. (And Ubuntu dist-upgrades are a bit hit or miss, even using their custom tool for it.)

Apt itself is usually fine, though.

Computer viking
May 30, 2011
Now with less breakage.

Bob Morales posted:

The other thing is the fact that it's BSD and not Linux, so a lot of system utilities are very different and won't transfer over.

In theory you could use OpenStep to build programs with a similar UI but it's still just way too different.

Eh, it's not that bad. The OS X userland resembles a reasonably current FreeBSD, and that's not a very difficult transition.

Computer viking
May 30, 2011
Now with less breakage.

Mh, true. I kind of miss a few FreeBSD monitoring and info tools on Linux machines, and I assume the same happens the other way. However, a pure C project shouldn't miss those overly much ... though the config/make framework certainly might.

Computer viking
May 30, 2011
Now with less breakage.

ShoulderDaemon posted:

Blocking all incoming connections is pretty standard policy for a lot of networks, especially for places like university dorms. Unless you have some reason to believe otherwise, this is by far the most likely answer.

Mine had an unusual solution there: They allowed incoming on a bunch of normal ports, e.g. 22 TCP. However, they blocked all UDP traffic, no exceptions. I'm not entirely sure how they concluded that that was a good solution.

Computer viking
May 30, 2011
Now with less breakage.

serling posted:

I'm running a Linux environment and have recently introduced a new Ubuntu 10.04 box into the system. It's connected to a NIS database on a RHEL server, and authenticates its users through Kerberos (Windows side). Both these services are up and running and work with the other machines in our server park.

However, when I try to log into our new box, some specific users are denied access. Kinit grants them a ticket when I supply their credentials in other contexts, but once I try to log them onto the system, they're instantly thrown out. I can't seem to find a common link between them either, which is frustrating. It seems to stem from something local on the box, even though I'm not using any hosts.allow/deny stuff.

It sounds familiar to me in that I have roughly the same issue: Samba, using AD for authentication, and some random selection of users are rejected for no obvious reasons. I haven't found any relevant log entries so far, but I also haven't looked too hard at it yet.

And of course, the only users with issues are actual other people, my admin and test accounts both work perfectly. Argh.

Computer viking
May 30, 2011
Now with less breakage.

i barely GNU her! posted:

I saw that on a Solaris box I used to administer, it turned out that the winbind database was being corrupted on save, but that bug was IIRC only specific to big-endian 64-bit systems (e.g. SPARC) and was fixed in 3.4.4.

If you're using winbind, have you checked that your idmap configuration is working and can create a map for every user? There's a tool that can dump the winbind cache tdb file which may be useful.

More to the point, actually enabling winbind in pam.d/samba helps. It worked with some users because I had already tested those over ssh (which was set up for winbind), I believe.

I feel somewhat stupid now.

Computer viking
May 30, 2011
Now with less breakage.

Gorilla Salsa posted:

The only apps I really need would be a calculator, maybe a notepad, a very simple MS-Paint style image editor, and Google Chrome. I understand the second point. I suppose I just worry that by starting off with all those programs and removing them, there will be residual leftovers (dependencies?) after their removal, so I'd be better off starting from scratch. Should I try slackware? I'm bad at Linux, but not inept.

vvvvv Awesome, thank you. vvvvv

I guess we should mention that the officially supported way to develop Android apps seems to be Eclipse with some plugins, so you might have to add that to your software list. :)

Computer viking
May 30, 2011
Now with less breakage.

waffle iron posted:

Is that easier to use than TestDisk? I've used it a couple times and it's a bit tough to use without having the documentation up.

It's not too bad. First, "kpartx -a -v imagefile"; the -v output is a list of device names that live somewhere in /dev , I think mine were in /dev/loop/ . The devices correspond to the partitions, so you should be able to fsck and mount one of them.
If that doesn't work, you can first check what partitions it sees with -l . If it doesn't find any, you will have to re-create one: I think gpart will work on image files.
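
From memory, the whole dance is roughly this (exact device paths vary - on some setups they land under /dev/mapper/ instead):

  kpartx -a -v imagefile           # map the partitions; -v prints the new devices
  fsck /dev/mapper/loop0p1         # check the first partition
  mount /dev/mapper/loop0p1 /mnt   # mount it somewhere
  umount /mnt
  kpartx -d imagefile              # tear the mappings down when you're done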

Computer viking
May 30, 2011
Now with less breakage.

diremonk posted:

This is kind of an odd/dumb question, but is running Ubuntu off a flash drive possible/feasible? Every page I've been to only deals with running a live version off the drive instead of a full install, which is what I would rather have.

My reason is that I have a media server running XP right now but it seems to be a waste having a 500 gig drive with nothing but Windows, PS3 media server, and Air Video Server.

Uhm - if your reasoning is that you have a lot of free space, why would you then want to run off a USB stick instead of a partition on the HD? :confused:

Anyway, I believe you can boot off an installer CD (or another USB stick), then choose the USB stick as your install target.

Computer viking
May 30, 2011
Now with less breakage.

diremonk posted:

The live version seemed to work fine, but I'm not sure I like the idea of running a system 24x7 that I can't patch or update without creating a brand new install. Or am I wrong about that too?

I've dist-upgraded live images, and it can go both ways. It probably depends on the filesystem layout of the particular live image: It will need space in places live images usually don't, so if it's set up with /home being the only place with free space (or an rw mount), you'll have issues. Installing to the stick normally seems like a reasonable choice for continued usage.

Computer viking
May 30, 2011
Now with less breakage.

ruttivendolo posted:

Hoping this is the right thread:

I'm having trouble using Python with NLTK on Windows and I've decided to install Linux.
I've never used it and I'm a total newbie: which version would you recommend (i.e. the most user-friendly for a Windows user who just needs to use Python)?

That's one of those things that's worth a thread of its own (and I suspect I might have overlooked one). :)

Anyway. Ubuntu is a common choice. Most things just work, and there's a heap of documentation and forums specifically for it. On the downside, the default UI (Unity) is a bit odd and has gotten a very divided reaction. It's not much work to install another UI, though, and there are other releases that default to a different one (e.g. Kubuntu, which defaults to KDE).

Computer viking
May 30, 2011
Now with less breakage.

Bob Morales posted:

There once was?

There was a period when it was arguably the easiest way to run bleeding-edge versions, if you wanted to try whatever new and exciting things were around. (These days I guess you'd just use a bunch of PPAs and hope for the best.)

Computer viking
May 30, 2011
Now with less breakage.

fletcher posted:

I'm looking for an SSH connection manager for Ubuntu 10.04 - any recommendations? I've been using PuTTY, since that's what I used on Windows.

Does it have to be graphical? Otherwise, you can do a lot with the config file. The manual is available with "man ssh_config", or I found a guide that could be useful here.
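
For example, a couple of made-up entries in ~/.ssh/config:

  Host web
      HostName web.example.com
      User alice
      Port 2222

  Host fileserver
      HostName 192.0.2.10
      User bob

After that, a plain "ssh web" picks up the hostname, user and port by itself.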

Computer viking
May 30, 2011
Now with less breakage.

Zom Aur posted:

I think there's a PuTTY version for Linux, but I've never used it myself.

Hah, I had completely forgotten that - I've even got it installed here. Seems to work very much like it does in Windows.

Computer viking
May 30, 2011
Now with less breakage.

That sounds like a good idea to me. SuSE is about as different as you can get among modern Linuxes, with its custom graphical tools, so yeah, it sounds like a good start. (Besides, I kind of like OpenSuSE. They tend to have good KDE defaults.) The redhat/fedora family would make a lot of sense, too.

If you want something more exotic later, you could always play with FreeBSD - most of the programs are the same, but a lot of administrative things are not. It's an interesting system for getting a wider perspective (and an appreciation for automated tools). I'm very fond of it, but I wouldn't recommend it to anyone who hasn't chosen it on their own. :D

Computer viking
May 30, 2011
Now with less breakage.

Social Animal posted:

Wow so it's for real? I'll admit I'm a little jealous.. please do tell how it is when it arrives. :)

Has that really been in doubt the last half year or so?
And yes, I look forward to the reviews. Might grab one myself when the supply stabilizes. :)

Computer viking
May 30, 2011
Now with less breakage.

Bob Morales posted:

Doesn't that really mean, "If I am out of memory, any and all programs that ask for more will crash and burn?"

I think the OOM killer preferentially kills whatever uses the most memory - which works out OK if something is leaky or otherwise eating a lot of RAM for no good reason, but less so if you're trying to do something just on the edge of what you have resources for.

Edit: not to say that you're not also risking random programs failing with ENOMEM.

Computer viking
May 30, 2011
Now with less breakage.

Bob Morales posted:

What prevented Objective-C and OpenStep from taking over Linux GUI software development? Was OpenStep not free and GNUstep was never finished/stable? It succeeded on OS X (but then again it's the only option)

At a guess, it was seen as an esoteric language while C++ was more ... established?

Computer viking
May 30, 2011
Now with less breakage.

Mr Darcy posted:

I got a cheap rear end netbook recently, and as it had a bloated windows install on it I thought I'd be cunning and put on Ubuntu 11.10

I installed it okay and have set it up fine (I think). I'm now trying to install some programs from .tar files - thought I'd use the netbook for roguelikes, so trying to install nethack. Got into the tar file and extracted to a folder in my home drive. I now have no idea how to get the sodding thing to run.

Is there anywhere online that would show me the absolute basics of installing and running stuff on the o/s? All the sites I've seen so far seem to assume some linux knowledge.

I guess I need the absolute dummies guide if possible.

a) If at all possible, use the package system instead (try apt-cache search someGame to see if it exists, then sudo apt-get install someGame if it does).
b) If you do have to compile it from source: Open a terminal and cd to the right folder. Run ./configure, which will almost certainly complain about missing libraries. Find and install those (ref. the apt-cache/apt-get method above); if there's a variant called -dev, you want that one. Try again. Repeat until it's happy, then run make. If that works, you'll get an executable in the same directory, typically named after the game. Try running it with ./whateverName .
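
The whole dance looks roughly like this (someGame and libfoo being stand-ins):

  sudo apt-get install build-essential   # compiler and make, if you don't have them yet
  cd ~/someGame-1.0
  ./configure                            # complains that libfoo is missing, say
  apt-cache search libfoo
  sudo apt-get install libfoo-dev        # the -dev package has the headers configure wants
  ./configure                            # repeat until it's happy
  make
  ./someGame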

Computer viking
May 30, 2011
Now with less breakage.

Keito posted:

Come on people, don't scare the poor guy away with tales of terminals and compilation from the get-go. Just use the Add/Remove interface in the Applications menu. See this page from Ubuntu's documentation for more information.

Oh yes, you're right - I read that as "nethack-alikes" and thought he had gotten tarballs of some obscure variants. I'd like to say that my plan a) was at least reasonable, but using the GUI tools is better.

Computer viking
May 30, 2011
Now with less breakage.

etcetera08 posted:

Anyone have suggestions for a good Quake-style drop-down terminal? The built-in terminal in Ubuntu is okay for light use but I miss the hotkey drop-down I can get with iTerm in OS X. (Oh god how I miss iTerm...)

Yakuake, maybe? It's a KDE app, and I don't use it myself, but it's been around for a while and gets updates and so on.

Computer viking
May 30, 2011
Now with less breakage.

Kaluza-Klein posted:

This is an odd question, but how can I be 100% sure that the system is finished writing to external storage before I unmount it/remove it?

For example, copying a 500mb file via usb to a cell phone in a terminal takes just a couple of seconds, but the end result is a corrupted file if I unmount the drive right after.

If I do an md5sum right after, it matches the original, but if I then unmount with Nautilus, it will tell me not to unplug until writing is finished, and often that is not for another minute or so.

Am I going crazy or is something wrong here?

In theory, you should be able to call sync, and when it finishes all buffers should be written to permanent storage. Practically speaking, it's more of a hint than an order, so... it might or might not actually work in your case.

Edit: As the manpage says, sync schedules the writes, but can return before they finish. Given that the issue in your case seems to be huge buffers somewhere in the middle that make writes seem to finish long before anything is permanently done, well... try, but I'm pessimistic.
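
That said, unmounting from the terminal should be safe, since umount has to finish writing everything before it returns. Something like this (with /mnt/phone being wherever it's mounted):

  cp bigfile /mnt/phone/   # returns as soon as the data is in the buffers
  sync                     # ask the kernel to flush - may or may not block
  umount /mnt/phone        # does not return until everything is on the device
  # now it should be safe to unplug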

Computer viking fucked around with this message at 15:28 on Jun 1, 2012

Computer viking
May 30, 2011
Now with less breakage.

Social Animal posted:

Just installed xterm on my server for kicks. I will say it is pretty surreal to run Firefox on my server through SSH. Does anyone use xterm for any serious purpose? It looks like a very ancient and outdated thing, not to mention slow.

Also if I install a package (let's say emacs) and I want to remove it, doing the yum remove command only removes the package and not the dependencies. What if I want to remove the dependencies as well? Do people ever do that when removing packages or is it better to just keep the dependencies around?

Xterm is sort of the terminal of last resort - it's light, fast, and doesn't depend on anything noteworthy. Besides, it's actually updated fairly often, so it works quite well for most terminal things even if the interface is clunky.

I'm not sure what it has to do with X-forwarding, though?

Computer viking
May 30, 2011
Now with less breakage.

evol262 posted:

If you worry about this, all you really need is xauth and xhost.

And practically speaking, all you need to do is ssh -X (or -Y, depending on settings - and you might want -C for compression, if the link has limited bandwidth).
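
E.g., with a made-up host:

  ssh -XC me@somebox   # -X forwards X11, -C compresses the traffic
  firefox &            # pops up on your local screen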

Computer viking
May 30, 2011
Now with less breakage.

If all your time fields are the same length, you can also use uniq -w. If I remember correctly it keeps the last duplicate of each run - though it might be the first, so do check.
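
Something like this, assuming the timestamp is the first 19 characters (adjust -w to fit; uniq only collapses adjacent lines, so sort first if the file isn't already ordered):

  sort times.log | uniq -w 19 > deduped.log   # compare only the first 19 chars of each line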

Computer viking fucked around with this message at 16:13 on Jun 27, 2012

Computer viking
May 30, 2011
Now with less breakage.

ToxicFrog posted:

Per the man page: "matching lines are merged to the first occurrence."

This should work:

rev file | sort -u -k1,1 | rev

That's not in the manpages on the ancient RHEL machine I checked. ;)
But yeah, rev and sort and/or uniq sounds fine to me, and is fairly easy to understand. Maybe a bit suboptimal on huge files (how efficient is rev?), but eh.

Computer viking
May 30, 2011
Now with less breakage.

evol262 posted:

Thinkpad yoga. Dell XPS

You can even buy an XPS preloaded with Ubuntu, I think. Even if you plan to reinstall it with something else, that's a good sign of sorts.

Computer viking
May 30, 2011
Now with less breakage.

Deduplication and compression are the other things that can give similar results - du returning on-disk sizes keeps tripping me up on the ZFS file servers at work. It's admittedly right there in the name of the utility, but still.

Computer viking
May 30, 2011
Now with less breakage.

Alternatively, you could try a different clocksource (the hardware timer the kernel reads to keep time). I think the RHEL 7 guide here should work - just try all the available ones and see if it changes anything.
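
From memory, you can see and switch it through sysfs (as root):

  cat /sys/devices/system/clocksource/clocksource0/available_clocksource
  # e.g.: tsc hpet acpi_pm
  echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource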

Computer viking
May 30, 2011
Now with less breakage.

Just for clarity, the difference between a softlink and a hardlink is that a hardlink isn't really a link at all: it creates another, equally real name for the same underlying file. A softlink, on the other hand, is more like a Windows shortcut: it's basically a magic file that contains a path.

This leads to some differences. You can create a softlink to anywhere, including files on other file systems, or nonexistent ones, or even entirely invalid paths. On the other hand, hardlinks only work within one file system, and you can only make one if the target exists and is a file (as opposed to, say, a directory).

If you delete a softlink, you just delete the link. If you delete the file it points to, you now have a link that points to nothing, much like a broken Windows shortcut. Deleting a hardlink or the original, on the other hand, is equivalent: both delete one of the names of the file. (There's no way to distinguish the "original" name.) Every file has a "link count" of how many names it has, and it lives on as long as the count is above zero. (Holding a file open in a program also keeps it alive - the kernel counts open handles separately from the link count - so if you delete the last name of an open file, it lives on anonymously until it's closed.)

Also, see the funkiness mentioned above about what happens when you replace hardlinked files.

As for "how", ln basically works like cp (the arguments are in the same order, something I've never been happy about how the manage explains). Use ln -s if you want to make a softlink; the default is hardlinks.

Computer viking fucked around with this message at 00:40 on Mar 1, 2019

Computer viking
May 30, 2011
Now with less breakage.

You can also fire up sshd and use PuTTY - it's not ideal, but it's definitely a better terminal emulator than PowerShell.

Computer viking
May 30, 2011
Now with less breakage.

For DISPLAY, it helps to remember that X is a server, not entirely unlike sshd: programs connect and talk to it to show graphics and receive input - and you can have multiple servers on one machine, or even connect over a network.

The number is an abstract "which server", :0 is "the first one". If you connect over TCP, it determines the port number (:0 and :1 map to 6000 and 6001), while if you use a Unix socket it's the difference between /tmp/.X11-unix/X0 or /tmp/.X11-unix/X1. (The connection method is determined by what you set DISPLAY to: A bare :0 uses a socket, while 1.2.3.4:0 would use TCP).

As mentioned, it will be set correctly by something in your graphical login process - I'm not sure exactly whose responsibility it is to set DISPLAY; the login manager, perhaps? Either way, everything you start inside X will inherit an environment where it's set correctly. Cron jobs do not: cron is started outside of X, perhaps even before X is up.
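
So if you want a cron job to talk to your running session, you have to set it by hand - something like this in the crontab (assuming the session is on :0; depending on the setup you may also need to point XAUTHORITY at your ~/.Xauthority):

  * * * * * DISPLAY=:0 xmessage "hello from cron"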

Computer viking
May 30, 2011
Now with less breakage.

Hollow Talk posted:

Everything newer than the Core 2 supports x86-64, as do some older things. This should be fine.

All Core 2 as well, I believe; the phase-in was at the end of the Pentium 4.

Those late Socket 775 Pentium 4 machines are sort of neat - they're dog-slow single-core+HT things, but they're just on the edge of modern: 64-bit, PCIe, SATA, DDR2 or DDR3. I've got one at work running Win7 64-bit as an experiment; with 4GB RAM and an SSD it's borderline usable. (Dropping in a Core 2 Quad would probably do wonders, though.)

On the other end, the low end/early boards in the range are the last ones that will run Windows 98 with official drivers and feel somewhat period appropriate.

Computer viking
May 30, 2011
Now with less breakage.

waffle iron posted:

Am I the only weirdo that uses 'most' as a pager?

There's at least two of us.

Computer viking
May 30, 2011
Now with less breakage.

waffle iron posted:

Anti-aliasing and sub-pixel font rendering were useful for LCD screens with a low PPI/DPI. These days they're mostly not needed.

Unless you're using a desktop PC with a not-tiny, non-4k screen, which is still an incredibly common scenario. Non-HiDPI laptops (e.g. 1920x1080 15") are also far from rare.
