ExcessBLarg!
Sep 1, 2001

sund posted:

Every distro except gentoo. minicom looks perfect, thanks!

minicom -c on

Why that isn't default, I have no idea.

ExcessBLarg!
Sep 1, 2001

Lexical Unit posted:

I was able to use this to get a WM/X-session on different TTYs where there usually isn't a WM/X-session. However, if I switch to another TTY and then switch back, I'm back at the terminal where I ran the command and not in my WM. How can I start a WM on a TTY, switch away from it, and then switch back and re-pull-up the WM I had before I switched away?
It doesn't quite work that way. When you start X, it selects the next available virtual terminal for its use. If you're running six gettys (which is typically the case), then X will take tty7 for itself. To check, do an "Alt-Ctrl-F1" in X11 to switch to tty1, then do "Alt-F7" to switch back to X.

In addition, when you start your first X session, it locks "screen 0" for itself. An X screen is a totally unrelated concept to virtual terminals, so don't be confused on that point. The relevant part is that if you try to start a second X session, it will still try to lock screen 0 for itself. That's why you have to do "startx -- :OTHER_SCREEN_NUMBER". However, regardless of which other screen number you use, it still allocates the next available virtual terminal for its use (typically tty8). So you can switch to your second X session with an "Alt-Ctrl-F8" (or "Alt-F8") from a console.

If you want X to use a specific virtual terminal, you should be able to specify it with "startx -- :OTHER_SCREEN_NUMBER vtOTHER_VIRTUAL_TERMINAL" (e.g., "startx -- :1 vt9"), although I've not tried it before.

ExcessBLarg!
Sep 1, 2001

rugbert posted:

Is the XDMCP protocol supposed to be slow as balls? The smallest task like, opening a folder takes a good 30+ seconds... is vnc faster?
Depending on the connection, the X11 network protocol is terribly slow. The problem is that the protocol does no compression and no server-side caching. This means that every time you minimize, overlap, or move around a window on your X11 server, it forces an application-level redraw event, consuming a lot of unnecessary network bandwidth. 20 years ago, this wasn't that big of a deal since networks were relatively fast (used on LANs only), and application GUIs were relatively simple, containing few pixmaps and such.

You can improve the situation a bit by using ssh compression when tunneling, and using a compositing window manager (which will cache draw events so there should be fewer network roundtrips). Another alternative is to use NX, which aims to significantly improve the bandwidth and response time of remote X applications. NX, when it works, is really great--desktop suspend & resume support, applications that look as if they're running locally, and works just fine over DSL. The problem is that the semi-proprietary nature of NX means it's not well supported by many Linux distributions, and so it's a bit of a hassle to install.
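
For example, a minimal sketch of running a single remote X application over an ssh tunnel with compression enabled (the user, host, and application here are just placeholders):
code:
ssh -X -C user@remotehost xterm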

VNC can be made to run pretty fast, but it's a much simpler (dumber) protocol, so often you have to trade speed for visual quality. In my experience, NX is both faster and better looking when it works, but VNC is much faster than regular remote X11 out of the box.

ExcessBLarg!
Sep 1, 2001

bitprophet posted:

The insanely terse typing that modality allows for works wonders on your hands/wrists -- far less mouse usage, far less typing, far less chording/stretching (hi Emacs). Not to mention it's simply faster in most cases.
I have perhaps an atypical but interesting use case. Oftentimes for work I'm running various scripted jobs 24/7, and they have a tendency to occasionally fail for whatever reason. When they do, I get an SMS.

My phone has a QWERTY(ish) keyboard and an SSH client. Since I do all my script editing with vim in a remote screen session, it's fairly trivial for me to pull up the session on my phone, hop in vim, and quickly search down to whatever needs to be fixed and plug it. The fact that I can do it from my phone is a huge convenience because it means I don't always need to be near a laptop/workstation, and it means my jobs can continue running without nodes sitting idle all the time. Few other editors would really let me do this because they would require either a mouse or chorded keystrokes that are really difficult, if not impossible, to do on a phone keyboard.

Perhaps the right answer is to have a job where I don't have to carry the pager all the drat time. But I'm a PhD student and I have a huge incentive to see that the experiments I run complete in a timely manner.

ExcessBLarg!
Sep 1, 2001

xPanda posted:

why do I have to set up this bridge interface at all? Why couldn't the default eth0 do this sort of thing in the first place?
Because standard Ethernet drivers don't supply bridging functionality, and frankly you wouldn't want them to. The current separation of bridge devices and connecting Ethernet devices gives much greater flexibility in how you set up your network.

There are two main ways to set up the network for the VMs: routed IP or bridged Ethernet. If your VMs are all on private networks with the host acting as a NAT gateway, you can set up the network either way. If the VMs all have public IPs in the same network segment (or at least should have addresses accessible from outside the VM host), you're going to have to go with bridged Ethernet.

For routed IP, the host machine would have a physical Ethernet interface (eth0, with an IP), and a bunch of tunX (IP-level tunnel) interfaces (or whatever libvirt uses) corresponding to each VM session. Each VM would be assigned an address in its own private subnet (e.g., 192.168.X.2) and the tunX interfaces also assigned an address for each private subnet (e.g., 192.168.X.1). From there you enable IP forwarding on the host and NAT on eth0. This gives the VMs NAT access to outside the host, and the VMs can communicate with each other since they know to route packets through the host.
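
As a rough sketch of that last step (interface names and subnets here are just placeholders), enabling forwarding and NAT on the host looks something like:
code:
# enable IP forwarding on the host
echo 1 > /proc/sys/net/ipv4/ip_forward
# masquerade everything leaving eth0, the outside interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE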

Bridged Ethernet makes the most sense when the VMs are assigned IP addresses from the same subnet, either private or public. Again, the host machine has a physical eth0 interface, and there's a bunch of tapX (Ethernet-level tunnel) interfaces corresponding to each VM session. Each VM is assigned an address in a common subnet (e.g., 192.168.0.X+2). The tapX interfaces on the host do not have an IP assigned. Instead, you create an Ethernet bridge (br0) with the tapX interfaces connected to it. If the VMs only need to communicate with each other, and not the host or the outside world, you're done.

However, to get the VMs to communicate with the host, you have to assign the host a single IP on the VM bridge network, so you assign an IP (e.g., 192.168.0.1) to the br0 interface--which is the host's "Ethernet" connection to the bridge. It wouldn't make sense to assign it to one of the tapX interfaces, since those are connections from the VMs. This is still separate from the public, outside network on eth0. But if you want the VMs to contact the outside, you can do the same thing: enable IP forwarding and NAT on eth0.

Now, if you want your VMs to have "public"-facing IPs (where "public" is either actually public, or IPs on a private network that's common to machines outside the host, i.e., there's no NAT), then you assign IP addresses to your VMs as before (or just have them do DHCP from an external DHCP server), create a bridge br0, and connect all the tapX devices and the external Ethernet (eth0) device to the same bridge. This connects everything together at the Ethernet level, so the VMs can pass frames to external hosts without the VM host having to serve as an IP router. In this setup there's no NAT because the host is not acting as a router. If you want the VM host also on that network, you would assign an IP to the bridge (br0) device. I think you could get away with assigning an IP to the eth0 interface instead if you want; it shouldn't really matter, but I don't think that's the convention.
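
A minimal sketch of that fully bridged layout using the old bridge-utils tooling (interface names and addresses are placeholders, and libvirt can do most of this for you):
code:
# create the bridge and attach the VM taps plus the physical NIC
brctl addbr br0
brctl addif br0 tap0
brctl addif br0 tap1
brctl addif br0 eth0
# move the host's address off eth0 and onto the bridge
ifconfig eth0 0.0.0.0 up
ifconfig br0 192.168.0.1 netmask 255.255.255.0 up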

The problem, it sounds like, is that you want the VMs to be on a private network and have the VM host serve as a NAT gateway, so that the VMs can contact other machines but it's all coming from a single IP address as far as external hosts are concerned; but you're creating an Ethernet bridge and bridging all the VMs to the outside at the Ethernet level, which "bypasses" the NAT.

If so, the simplest solution is to keep the private network bridge, "disconnect" eth0 from it, and assign the VM host an IP on the private network and do NAT on eth0 (see second example). You could also dump the bridge entirely and go routed IP (first example), but that would involve reconfiguring the networks on the VMs. In theory the latter is more efficient if you can use tun interfaces instead of tap, but in practice it doesn't matter.

xPanda posted:

Is it a really bad idea to build your own package/RPM and deploy that?
If you really need the new functionality, it's totally reasonable to backport a RHEL 6 (or whatever) RPM to 5.5. Basically this usually means compiling the source RPM with RHEL 5.5's libc and such. In the Debian world, there are backports repositories that provide updated packages specifically for this purpose, and it wouldn't surprise me if CentOS or something provided something similar.
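
A hedged sketch of what that backport rebuild usually amounts to (the package name and version are hypothetical, and this assumes yum-utils is installed for yum-builddep):
code:
# pull in the build dependencies, then rebuild the newer source RPM locally
yum-builddep foo-1.2-3.el6.src.rpm
rpmbuild --rebuild foo-1.2-3.el6.src.rpm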

xPanda posted:

Is this why CentOS/RHEL is the Right Way to do enterprise linux?
What you describe is how you would deal with such package distribution--but it's not RHEL/CentOS specific, folks do the same with Debian and Ubuntu too.

RHEL is the "Right Way" to do enterprise Linux because of support contracts and because it serves as a single target for proprietary software vendors to support. So if you're running Oracle, DB2, or something on Linux, you're probably stuck with RHEL/CentOS.

If you're not looking for a support contract and not running proprietary software, RHEL is no better than SuSE, Debian, Ubuntu LTS or something, except perhaps for operator familiarity.

ExcessBLarg! fucked around with this message at 18:31 on Feb 18, 2011

ExcessBLarg!
Sep 1, 2001

ionn posted:

It would work well in practice for my needs, but it's not proper.
What's your need?

dmidecode is the exact answer to your question. But if you're just performing a machine/spec inventory of all the Linux boxes on a network, MemTotal (/proc/meminfo) rounded is probably the easiest obtainable answer.

/proc/kcore size, when it did "work", was still not entirely accurate. In recent kernels, similar information is available from the sum of the DirectMap* (/proc/meminfo) entries. The problem is that on 4 GB+ (x86) machines, the amount of "physical memory" may be 0.5 GB-ish larger than the actual amount, as it includes the 32-bit MMIO address space, since many BIOSes remap memory around it.
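
For instance, both of these are easy to script across a fleet (dmidecode needs root, and the exact output format varies a bit by vendor firmware):
code:
# per-module sizes as reported by the DMI/SMBIOS tables
dmidecode -t memory | grep -i 'size:'
# total usable memory as the kernel sees it, in kB
awk '/MemTotal/ {print $2}' /proc/meminfo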

ExcessBLarg!
Sep 1, 2001

atomicthumbs posted:

I just got a shiny new VPS account and am looking for a logging program that can make some spiffy graphs. Are there any that keep a log of CPU usage/memory usage/etc. over time that I can look at later?
There's a bunch based around RRDtool: Ganglia, Munin etc. The benefit of these is that it's fairly simple to deploy a complete solution, but the downside is they lack long-term data retention (except for time averages) should you want to go back and look at network throughput or something for a specific day.

Sysstat is another alternative. In addition to the familiar utilities like iostat, pidstat, etc., it includes a collection daemon, "sadc", that will root around /proc and pull out a bunch of global statistics on a specified time interval and write them to a binary log. There's a companion program, "sadf", that will dump this log into CSV or tabular text format, which you can then use in various statistical/plotting programs like R, gnuplot, etc. It requires a fair bit of setup to get plots as an output, but it's a great tool for keeping accurate measurements over a very long time--should you want to do that.
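
For example, something along these lines (the sadc path and log location are the Debian defaults, so adjust for your distribution; the sadf field selection is just illustrative):
code:
# sample every 60 seconds, 1440 times (one day), into the standard daily file
/usr/lib/sysstat/sadc 60 1440 /var/log/sysstat/sa$(date +%d)
# later: dump CPU, memory, and network stats from the day-15 file as delimited text
sadf -d /var/log/sysstat/sa15 -- -u -r -n DEV > day15.csv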

ExcessBLarg!
Sep 1, 2001

hootimus posted:

Is it okay to have the domain name for my local home network ending in .local?
.local is not one of the four reserved (RFC 2606) TLDs, so in theory it could be put into use by IANA, but this is unlikely to ever happen. .local is used by many organizations for names of private-network machines, so there's plenty of precedent for what you want to do with it.

Also, Zeroconf uses .local, so as long as you use it in a consistent fashion (each machine is "hostname.local") then it's an obvious choice.

hootimus posted:

What do people typically use for their home networks?
I have a registered domain that I use for both public-facing and (private) home network machines. I probably wouldn't recommend buying a domain name for no other purpose, but if you think you might want one in the near future for use with a VPS or something, it's a reasonable option.

One nice benefit of that is that I run OpenVPN on a few machines outside my home network to VPN in. Hostname lookups go through regular DNS and always work with no fear of conflict.

Also, we have a .local private network at work that my home router, slyly, bridges with another VPN tunnel and resolves DNS requests for. So in my case, I can't use .local for my own stuff since that namespace is already claimed from my perspective.

ExcessBLarg! fucked around with this message at 16:45 on Feb 25, 2011

ExcessBLarg!
Sep 1, 2001

hootimus posted:

What's going on here?
Glibc is probably resolving .local via a method other than a DNS lookup.

What do your /etc/host.conf, /etc/nsswitch.conf, and /etc/resolv.conf contain? Are you running avahi?
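
For reference, the hosts line that libnss-mdns setups typically install looks something like the following; the [NOTFOUND=return] after mdns4_minimal is what stops .local names from ever reaching regular DNS:
code:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4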

ExcessBLarg!
Sep 1, 2001

hootimus posted:

My nsswitch.conf was hosed up.
I wouldn't say that's "hosed up" so much as "someone else decided that .local should only exist in the mdns universe," which is an arguably reasonable, if short-sighted, policy.

Which distribution is this? It's worth knowing which are going to cause this kind of trouble in the future.

ExcessBLarg!
Sep 1, 2001

Misogynist posted:

Processes shouldn't be issuing SCSI commands.
Jörg Schilling and libscg disagree with this.

He wrote cdrecord & libscg (which cdrecord uses) for Solaris/Linux/whatever. The idea is kind of cool: libscg is a generic SCSI transport layer, so the SCSI bus/commands may be observed/issued by userspace programs directly. This allowed cdrecord to be ported to a bunch of Unix variants that implemented their kernel-level SCSI stacks differently. While this made his CD burning tools widely available, they were for a long time a giant pain in the rear end for Linux users, especially when cdrecord insisted on making SCSI calls directly to drives that weren't even SCSI devices (ATAPI, a SCSI-speaking ATA hybrid) and so required a bunch of hackery in the kernel to actually work.

He's been mad at Debian (and Linux in general) ever since Debian folks patched cdrecord to make it a bit easier to use (be able to run as a non-root user, allow users to specify drives using a device name instead of SCSI bus:device:lun notation, etc.). He flipped his poo poo during a license controversy a few years later, and I don't think he's been heard from much since.

ExcessBLarg! fucked around with this message at 17:20 on Feb 28, 2011

ExcessBLarg!
Sep 1, 2001

Spike McAwesome posted:

Any ideas what roadblocks or other nonsense I might run into?
Honestly, the biggest problem with "unnecessarily" running amd64 desktop Linux in the past few years has been Flash support. Originally Flash required running an i386 web browser (requiring i386 libraries as well), although there were some hacks to allow amd64 Firefox to use i386 plugins. Then there was a beta build of Flash 10 for some time that worked reasonably well, but most distributions dropped it when the last major Flash vulnerability was announced and no fix was released. But with Flash 10.1, amd64 Linux support is just fine, so it's not really an issue anymore.

I don't really think twice anymore about which of i386 or amd64 Linux I "should" run; if the machine is amd64-compatible, that's what it gets. Yes, there's a marginal increase in memory usage, but I rarely use more than 500 MB on my desktop anyways--the rest is just page cache. The only real memory-intensive thing I do is statistics stuff/number crunching, and since that's all floating-point math, the memory usage is the same on both architectures, but amd64 is faster due to SSE being used by default.

Edit: Also, the memory issue is a bit more complicated than it appears on its face.

On one hand, Linux likes to map the (usable) physical address space into a contiguous VM region so it doesn't have to do any paging tricks. The default split of the (i386) 4 GB virtual address space is such that only machines with slightly less than 1 GB of RAM can have all of their physical memory mapped, although there is a compile-time kernel option to change this at the cost of process memory size. So even on machines with just 1 GB of RAM, it's advantageous to run amd64 Linux if you can.

On the other hand, i386 machines with PAE can support up to 64 GB of memory. So if your machine supports PAE (all amd64 machines do?), you don't have to run amd64 to use 4 GB+ of memory.
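
If you're unsure what a given box supports, the CPU flags will tell you ("pae" for PAE, "lm" for 64-bit long mode):
code:
grep -Ewo 'pae|lm' /proc/cpuinfo | sort -u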

ExcessBLarg! fucked around with this message at 19:03 on Mar 6, 2011

ExcessBLarg!
Sep 1, 2001

Ninja Rope posted:

PAE sucks due to the fact kernel data structures grow larger to accommodate the increased amount of data that needs to be tracked about memory allocations/locations with PAE.
Sure, but amd64 long mode is even worse in this regard. It's the necessary cost of dealing with large memories.

I guess it would suck on an i386 machine with less than 4 GB of memory running a PAE kernel, as I'd assume the kernel structures (not page tables) are sized at compile time.

ExcessBLarg!
Sep 1, 2001

ribena posted:

Finally, if flash support is that critical for you and my above suggestion isn't good enough then you could consider installing the i386 release if there's no major reason you need to be on amd64.
On Debian, check out the "schroot" package. It makes it somewhat easy to install i386 Debian in a chroot and run programs from it, while preserving access to your home directory through automatic bind mounts and such.

I actually use it more to test things out in different Debian releases (sid, squeeze, lenny, etc.), but it works fine for maintaining an i386 personality for the few things that just won't run in amd64.
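
A rough sketch of setting one up by hand (the chroot name, path, and mirror are placeholders, and you also need a matching stanza in /etc/schroot/schroot.conf):
code:
# build a minimal i386 squeeze chroot
debootstrap --arch i386 squeeze /srv/chroot/squeeze-i386 http://ftp.debian.org/debian
# then run things inside it; your home directory comes along via bind mounts
schroot -c squeeze-i386 -- uname -m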

That said, I haven't had much issue with flashplugin-nonfree 1:2.8.3.

ExcessBLarg!
Sep 1, 2001

FISHMANPET posted:

Highly recomend it if you need to see diffs, and are not a 60 year old greybeard that wrote diff.
I find that if you have to read a diff in the CLI anyways, reading it in vim with "ft=diff" and syntax highlighting enabled makes it much easier.
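
For example (the filenames are placeholders, and the same thing works with VCS diff output piped in):
code:
diff -u old.c new.c | vim -R -c 'set ft=diff' -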

ExcessBLarg!
Sep 1, 2001

Misogynist posted:

I strongly suggest that every administrator know that less.vim exists and is basically staring at you with big puppy eyes asking you to use it
Huh, didn't know about that, but I've been using "vim -" as my pager for a long time anyways. I guess the downside to that, aside from taking a bit longer to quit, is that I end up creating vim swap files absolutely everywhere.
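
If the swap files bother you, "-n" tells vim to skip creating them, e.g.:
code:
dmesg | vim -R -n -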

ExcessBLarg!
Sep 1, 2001

Modern Pragmatist posted:

If I connect via the external IP from home, it is about 5MBps because it has to be routed through my ISP's system so I can't take advantage of my gigabit network.
This sounds suboptimal.

Can you provide some details on your home network setup? Do you have something like 5 static IPs from your ISP but they're not on their own subnet? Do you have a router, or do you just have an Ethernet switch plugged into the modem?

Unless you have a really special situation, connecting to your server via the internal or external IP shouldn't matter; under no scenario should traffic go out over your ISP connection and back in. If you have a router, it's misconfigured and should be fixed.

If you don't have a router, well, it might be worth getting one. But as a stopgap, you can likely add a "host route" for your server's external IP if your laptop is connected to the same network the server's external interface is on (i.e., the external & internal IPs are bound to the same network interface).
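
The host route itself would be something like this on the laptop (the address and interface are placeholders):
code:
# talk to the server's external IP directly over the LAN instead of via the gateway
ip route add 203.0.113.10/32 dev eth0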

ExcessBLarg!
Sep 1, 2001

Modern Pragmatist posted:

The File Server is the one that I want to access it. I have one IP from my ISP.
So that's the IP of the router. What port forwards do you have enabled? SMB ports to the file server?

Either there's something else wrong, or your router is definitely broken. With your described network, under no circumstance should it be routing data to the first hop on your ISP, which then forwards the request back. I was imagining a setup in which you had multiple addresses, where a naive configuration could result in that behavior.

Can you post a traceroute?

Anyways, one alternative is a VPN setup (e.g., OpenVPN). Basically, you always contact the file server via its internal address, and when you're away from home you run a VPN client on your laptop that provides access to your internal network. As a bonus, you can access other hosts this way, and it guarantees that all traffic is encrypted even if it's not at the application-protocol level.

Edit: What model wireless router is it?

ExcessBLarg! fucked around with this message at 18:00 on Apr 7, 2011

ExcessBLarg!
Sep 1, 2001

Kaluza-Klein posted:

I am looking through the debugging options for dpkg, but is there a way to see where it is looking for libc6, as I can find it all over my system.
It might be looking specifically for libc6 of i386 arch, which you won't have since libc6 is amd64 arch, but you do have libc6-i386. I thought there was a script to automatically fix i386 package dependencies for installation on amd64, but the name escapes me and Google reveals nothing on that.

Anyways, you can probably work around this by manually extracting the control file, changing the dependency to libc6-i386, and shoving it back into the deb. This is roughly done by:
code:
# pull the control archive out of the deb
ar x foo.deb control.tar.gz
# unpack it, rewrite the dependency, and repack it
mkdir control; cd control
tar xzf ../control.tar.gz
sed -i 's/libc6/libc6-i386/g' control
tar czf ../control.tar.gz .
cd ..
# put the modified control archive back into the deb
ar r foo.deb control.tar.gz
But there might be more than one dependency that needs to be fixed, e.g., all the lib32 stuff.

If the important contents of the deb amount to a single file, it might be easier just to extract it and shove it wherever it needs to go, without worrying about installing the whole thing.

ExcessBLarg!
Sep 1, 2001

taqueso posted:

I've been using rsnapshot to do hourly/daily/weekly snapshots of approximately 100G of data. The filesystem holding the snapshots is xfs. I've been getting some corruption: typically I am unable to delete some set of files (hardlinks from snapshotting) without taking the filesystem offline and repairing it.
I've been doing the same thing on jfs for six years now. I switched to jfs originally because its performance was better than ext3's and it was less buggy than xfs. I also figured IBM was more likely to support jfs on Linux long-term than SGI (or whoever they are now) would. I haven't tried ext4 for this task.

To date I've never had a corruption issue with hardlink deletions. I don't know if 100G is your total or transactional size, but for comparison, I churn through 400k inodes/hardlink updates daily on mine.

I did, once, have an issue when inserting new data on a recently online resized volume. The existing data was recoverable, and this was in a circumstance separate from the snapshotting/backups.

For long term archival I would still use ext3, possibly ext4. Those file systems are relatively simple, and I feel confident that in the absolute worst case scenario I can debugfs my way through them. But if the snapshot repository isn't your only backup archive and you need metadata performance, jfs might be worth a shot.

ExcessBLarg! fucked around with this message at 16:46 on Apr 13, 2011

ExcessBLarg!
Sep 1, 2001

waffle iron posted:

I find ext4 to be stable and reliable for desktop use, no idea if it would perform better than xfs, but certainly better than ext3.
Actually I'm not sure ext4 would perform any better for his purposes than ext3. Most of ext4's improvements (extents, multiblock & delayed allocation etc.) are in data-block management and for responsiveness in interactive desktop usage.

taqueso's bottleneck is pathname lookups for inode updates. ext3's H-trees should help here, but I don't think ext4 specifically improves on them. I'm also not sure how that compares to xfs/jfs's B+-trees. Six years ago I was primarily concerned with CPU overhead and didn't really care about snapshotting time. Minimizing disk seeks on lookups is probably going to get the most performance benefit, though.
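
If you want to check whether an existing ext3 volume actually has directory indexing (dir_index) enabled, the device name here being a placeholder:
code:
tune2fs -l /dev/sdX1 | grep -i features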

ExcessBLarg!
Sep 1, 2001

taqueso posted:

I am seriously considering a move to jfs,
If you can scrounge up another 100 GB temporarily, or at least enough for a subset of your snapshots, it would probably be worth doing some quick benchmark tests to see which of jfs and ext4 is faster. I'd be curious about the results, actually.
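
A crude benchmark in the spirit of what rsnapshot actually does (a hardlink-heavy copy followed by a bulk delete), with the paths as placeholders:
code:
# hardlink-copy an existing snapshot tree, then delete the copy, timing both
time cp -al /mnt/test/snapshot.0 /mnt/test/snapshot.tmp
time rm -rf /mnt/test/snapshot.tmp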

taqueso posted:

Are there any sane choices to consider beyond ext4, jfs and xfs?
I don't think so. Personally I never thought reiserfs was a good choice since Hans was always an argumentative rear end in a top hat, and its development seemed pretty centered around him. Obviously after what's transpired, it was a wise decision to avoid it.

As mentioned, btrfs is the other main candidate, but I don't think it's a daily-driver yet.

ExcessBLarg!
Sep 1, 2001
Just to make clear, in case you weren't already aware:
code:
rsync -aH --numeric-ids /old_fs_mount/ /new_fs_mount
will copy over your existing snapshots preserving hardlinks. Very useful if you want to test without having to rebuild a snapshot repository.

Throw a "-S" in there if you also have sparse files, and a "-v" if you want it to take 10x longer to copy due to scrolling each file out to your terminal.

ExcessBLarg!
Sep 1, 2001

taqueso posted:

It looks like ext4 is the winner by a large margin, probably due to delete performance. Does anyone know the details of how/why it is so much better?
Cool! I'll definitely have to check it out the next time I do a hardware upgrade.

Without digging into it and doing a bunch of additional benchmarking, I can't give an answer that isn't purely speculative. The first thing I would consider is ext4's H-tree directory indexes: although they're available in ext3, it appears that they're on by default in ext4. The other is that ext4 may well have a heuristic whereby, if it sees a lot of unlinks for files in the same directory, it caches the directory entry or does some other speedup activity there. They've been doing a fair bit of heuristic optimization of traditional VFS weaknesses (having to unlink file-by-file instead of blowing away an entire directory at once).

In general, while the underlying design of jfs & xfs was to optimize throughput, they (and xfs in particular) were still designed with bulk transfers & streaming workloads in mind. Although the on-disk structure of ext4 is less fancy compared to modern file systems, the ext folks have really been pushing the driver to speed up interactive desktop performance. Something about the KDE folks making a gajillion dotfiles in your home directory and touching them all on boot and stuff.

Edit: I should've expected this earlier; metadata is a huge area for speedup since desktop files tend to be small. I didn't think about it at first because the more "controversial" features of ext4 that have hit the news were about data-block management.

ExcessBLarg! fucked around with this message at 16:17 on Apr 15, 2011

ExcessBLarg!
Sep 1, 2001

Erasmus Darwin posted:

An extra 'cat' process means jack poo poo in terms of resources. Computer time is cheap. Programmer/user time is expensive. This has been a widely accepted maxim virtually everywhere EXCEPT for the useless cat issue.
I'm in agreement, but furthermore:

There's maybe an argument for removing superfluous cats from shell scripts, although more from a readability standpoint.

But for one liners? Who gives a poo poo about how computationally inefficient they are? I write some of the most inefficient one-line abominations known to man and it doesn't matter since they finish in a blink instead of 1/10th a blink. Either way, I have to run it once, grab the results I need and move on with life.

ExcessBLarg!
Sep 1, 2001

Modern Pragmatist posted:

Apparently this issue is resolved in 2.6.39.
2.6.39 has been in sid for a little while, although they recently updated to 2.6.39-2. You can go through the trouble of adding sid repositories and APT pinning so only the kernel is updated, but honestly it's easier to just download the deb, and "dpkg -i" it. Eventually it'll roll into wheezy and you'll get updates for it that way.

Or just wait a week and update.

ExcessBLarg!
Sep 1, 2001

nitrogen posted:

Your best bet is to install the source package for 2.6.39 and compile it yourself,
Compiling distribution kernels is slightly painful; assuming the process hasn't changed much in the past few years, it's one of the few areas where "dpkg-buildpackage" doesn't do the right thing. Or maybe it does, if the result you want is 13 different kernel flavors.

nitrogen posted:

I don't think there's a binary package for it yet native to wheezy.
Generally speaking, packages aren't ever native to testing. They're built for/on unstable (or experimental), released there, and get pushed to testing after a week-ish with no serious bugs filed against them. The only issue with installing packages from unstable on testing is that versioned dependencies might have to be pulled in, but that's not a problem here.

nitrogen posted:

Why exactly are you running unstable anyway? It's way too early in the release cycle for Wheezy to be running unstable for anything other than a toy, IMHO.
I've been running sid since the early 2000s, and a continually-upgraded same install of sid since 2007. I wouldn't use it for production poo poo, but if you're willing to handle the twice-a-year things that break, it's not horrible to use on a personal machine.

I wouldn't recommend anyone run unstable who is unwilling to file a bug report though. That's one of the major points of running unstable vs. testing, to file bug reports when poo poo breaks.

ExcessBLarg!
Sep 1, 2001

nitrogen posted:

http://backports.debian.org/Instructions/ that might be a better way to accomplish what you want.
Backports is more for when you want to run a stable, production system, but need one or two newer versions of packages than what the shipping version offers.

If he's coming from Fedora, testing sounds entirely appropriate. Also, who's to say his kernel bug isn't an issue in 2.6.32?

ExcessBLarg!
Sep 1, 2001

dolicf posted:

Anyone pretty familiar with Ghostscript?
It looks like it might be coughing on FlateDecode. By chance did you compile Ghostscript without the zlib development package installed? Usually things complain, but Ghostscript might've just disabled support for it since I don't think it's used that extensively by PostScript (but very much so by PDF).

Anyways, the real problem is that your client is dealing with PDF files that require (PDF 1.5) JPXDecode support for JPEG2000-encoded images. Ghostscript (should) support this if compiled with JasPer, which I imagine Ghostscript is optionally dependent on so you'll need to make sure that library (and any dev package) is installed before building.

Depending on how flexible your client is, a better/alternate option might be to install Poppler (forked from Xpdf). In particular, the "pdftoppm" program can be used to rasterize a PDF to ppm or png formats, and you can use ImageMagick's "convert" to make JPEGs out of those. I generally try pdftoppm first as Ghostscript is slow and sometimes buggy, although I've encountered PDFs that Poppler craps out on but Ghostscript handles like a champ.
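
A rough pipeline of that approach (the resolution and filenames are just examples):
code:
# rasterize each PDF page to a 150 dpi PNG, named page-1.png, page-2.png, ...
pdftoppm -r 150 -png input.pdf page
# then turn each PNG into a JPEG with ImageMagick
for f in page-*.png; do convert "$f" "${f%.png}.jpg"; done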

Anyways, Poppler supports JPXDecode via OpenJPEG, so you'll need the relevant library installed for that if you're building from source again.

ExcessBLarg!
Sep 1, 2001

Megaman posted:

1) how do I install ATI drivers
It's been a while since I last used an ATI card, and mine is quite old by now (R300?). The reverse-engineered driver worked best and should "just work" if you have the xserver-xorg-video-ati package installed. You may have to pull in firmware-linux-nonfree too. See the AtiHowto wiki page.

If you want to use the proprietary fglrx driver, which may have the best support for whatever card you have, but has otherwise always sucked, you'll have to do some magic. In sid, it looks like installing the "fglrx-driver" should take care of everything but fixing up /etc/X11/xorg.conf. The procedure might be different in squeeze. Anyways, see ATIProprietary.

Megaman posted:

3) how do i disable capslock on login
Again it's been a while, but in squeeze and after you should be able to disable capslock system-wide by "dpkg-reconfigure keyboard-configuration" and entering "ctrl:swapcaps" or "ctrl:nocaps" (for swapping with ctrl, or replacing so you have two controls) at the prompt that allows you to enter arbitrary string options. You may have to install the keyboard-configuration package first, as I think it was optional at one point.
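
Equivalently, you can set the option directly in /etc/default/keyboard (which is the file dpkg-reconfigure writes out) and then restart X or re-run setupcon; for example:
code:
XKBOPTIONS="ctrl:nocaps"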

ExcessBLarg!
Sep 1, 2001

brc64 posted:

but what I was really wanting was an identical archive, not a somewhat compressed one.
I use "vobcopy -m" for disks that just have CSS. For ones that are crazy mangled with additional protection I go straight to AnyDVD since everything else is a waste of time.

ExcessBLarg!
Sep 1, 2001

Ziir posted:

That's what I'm doing right now so sometimes I'll have 5+ SSH connections to a single system, but I'm sure there must be a bette way to do this?
Screen has already been mentioned, but that's not what you want if you want to be able to run multiple ssh sessions in different terminals (windows) or something.

An alternative is OpenSSH's control socket feature. For your first ssh session, run:
code:
ssh -MS /tmp/bar.sock foo@bar.org
which starts a regular shell session, where you have to authenticate as usual. But it also creates a /tmp/bar.sock socket. Now, when you want to open subsequent shells:
code:
ssh -S /tmp/bar.sock whatever
which opens another shell session to foo@bar.org. Note you don't have to put the user and hostname in again, but you'll have to put something (anything) in as a hostname; "whatever" is perfectly valid. This second (and any subsequent) session is multiplexed over the first ssh, so you don't have to authenticate again.

Now, if you kill the first ssh session, all subsequent sessions die. One way around that is to open a terminal, run:
code:
ssh -NMS /tmp/bar.sock foo@bar.org
which authenticates you but doesn't start a shell. Then you can minimize this terminal and leave ssh running until you're done for the day or whatever. At that point, any "ssh -S /tmp/bar.sock"s will work without further authentication. This is also great for scp, rsync, etc.
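
If you end up doing this a lot, the same behavior can be made automatic with something like the following in ~/.ssh/config (the host pattern and socket path are just examples):
code:
Host bar.org
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p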

ExcessBLarg! fucked around with this message at 01:05 on Jul 1, 2011

ExcessBLarg!
Sep 1, 2001

SynVisions posted:

Why not just use key based authentication?
1. Some equipment I use doesn't/can't do pubkey auth. Actually, this is on two extremes. The first are secure machines where the only allowed login method is two-factor, PIN + OTP token--typing in a new password after every rsync is extremely annoying. The second are APC console servers which, to my knowledge, only do password auth, although I don't think these guys support multiplexing of sessions over the same socket either.

2. Pubkey auth doesn't always play nice with single-sign-on (e.g., Kerberos) systems. Ideally I already have Kerberos tickets on the machine I'm using, so any ssh connections are password-less anyways (GSSAPI auth). However, if I'm on an old machine whose ssh doesn't have proper GSSAPI support, or if I'm working on machines in two realms, sometimes it's easier to just use password auth than it is to use kpagsh to set up a second credentials cache.

3. When working with a temporary machine, or a machine that gets replaced about as often as I log into it, or some other infrequently used riff-raff hardware, it's easier to set up a control socket with password auth for the few minutes I need to do work on it than it is to set up an authorized_keys file I'm never going to use again.

4. Key exchange is really slow on old Sun hardware, like "30 seconds" slow. Although I haven't had to use them in some years, any (secure) mechanism to avoid unnecessary key exchange is a blessing on these guys.

Most of the time I do use pubkey or GSSAPI, and I don't really care about the number of sockets I have outstanding. But there are some odd-ball scenarios where having the ability to multiplex sessions is nice for whatever reason.

ExcessBLarg!
Sep 1, 2001

brc64 posted:

it's scrolling past the buffer in putty so I can't easily review the list to determine which ones I need. I'd like to redirect the output to a text file that I can quickly scroll through, so I tried:

HandBrakeCLI -t 0 -i /media/cdrom > dvd.txt
Another tip: you can also do:
code:
HandBrakeCLI -t 0 -i /media/cdrom | tee dvd.txt
to have the output both printed to the console and redirected to a file.

I realize the above example doesn't quite work for reasons already discussed, but you may find the general idea to be useful.

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

I hosed UP REAL BAD
What version of Debian and how up-to-date is it?

If it's out of date with regard to the latest security patches, your best bet at this point might be to run a local root exploit from your user account, fix up /etc/passwd or add /bin/nologin to /etc/shells, then su properly and clean up the mess.

ExcessBLarg!
Sep 1, 2001

angrytech posted:

I'm curious here, what sort of problems is it possible to have with apt?
It's not so much apt as it is packages with inconsistent dependencies. These are rarely seen in Debian testing and never seen in stable, but they happen somewhat often in unstable.

Typically a base library will be upgraded, but some library or application on top of it specifies a versioned dependency older than the updated library. Usually an "apt-get dist-upgrade" will report that the base library can't be upgraded yet, and that's fine. Other times it will suggest upgrading that library and removing a quarter of the packages on the system or something; that's not fine. But it's not a problem if you're paying attention, or just running "apt-get upgrade".

The real problem happens when you need to install a package in unstable that's suffering from an old versioned dependency. Since that package wasn't installed previously, the base library is likely already upgraded. If you attempt to install the package, it claims it can't because of an unsatisfied dependency. If you try to install the dependency, it spews various version numbers and a bunch of other babble.

But I don't really blame apt for any of that, it's just one of the things you run into when running unstable on machines. I've been doing that for over a decade now, and having the latest everything (well, almost) has been worth the occasional broken package. One of these days I might decide I've had enough and just drop down to testing once a new stable release nears (when even unstable goes into a semi-frozen state).

Edit: Also, if a package has a broken post-inst script, or attempts to overwrite a file that exists in another package, apt seizes up a bit and fun can be had. Usually a "dpkg --configure -a" flushes most of the queue and the rest can be handled by judicious use of "dpkg --force-overwrite" or manually hacking the post-inst script to return 0 at the right place. If I'm not in a hurry, it's bug report time.

ExcessBLarg! fucked around with this message at 04:47 on Jul 12, 2011

ExcessBLarg!
Sep 1, 2001

Erasmus Darwin posted:

If it's the latter, he might be able to use a curses-based pseudo-GUI such as Midnight Commander (available as the "mc" package on RedHat-based distros) or he could use an X-based file manager with the display exported to his Mac (which should be a fairly well-documented process).
sshfs (w/MacFUSE), or even SFTP?

ExcessBLarg!
Sep 1, 2001

Needs More Ditka posted:

I'm trying to learn more about the Linux end of Linux (text files, commands, etc) rather than the graphical, but I'm also operating on a computer that needs to work.
If you're looking to figure out how to interact with the command line and use various terminal-based programs, the distribution doesn't really matter. If the machine "needs to work", you might want to mess around in a second user account so you don't accidentally blow anything away.

If you're more interested in a command-line approach to system administration, Debian is a reasonable choice. It's not GUI-centric and is completely GUI-agnostic, which, in my limited experience, is unlike Ubuntu.

Do keep in mind that much of the system administration stuff one does in Debian is fairly Debian (derivative) specific, although analogues exist in Red Hat. Yes, source compiling a tarball is pretty much the same in any distribution, but then you're evading your distribution-specific mechanisms for packaging and dependency tracking and such.

ExcessBLarg!
Sep 1, 2001

Bob Morales posted:

Is there something more basic I can use?
If it doesn't need to be live, you can use sadc from sysstat to collect all of those metrics (and more!) in an activity file which you can dump and copy off once a day or so. From there you can use sadf to dump it out to CSV and plot it with whatever.

Otherwise if you're looking for a turn-key RRDtool + web front-end solution, Ganglia and Munin are OK.

ExcessBLarg!
Sep 1, 2001

Sojourner posted:

I'm looking to build a linux router appliance that will act as a layer 4 router.
See the Linux Advanced Routing mini-HOWTO; the second example is basically what you want.

Basically you use netfilter/iptables to mark packets that meet particular criteria, namely the packet type and destination port. Then you add routing rules (using the newer "ip rule"/"ip route" commands, not the traditional "route") that match the marked packets and send them over the right interface/gateway.
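
A minimal sketch of that idea (the mark value, table number, addresses, and interface are all placeholders):
code:
# mark outbound TCP traffic destined for port 443
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j MARK --set-mark 1
# route marked packets through a separate table with its own default gateway
ip rule add fwmark 1 table 100
ip route add default via 192.0.2.1 dev eth1 table 100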
