Alowishus
Jan 8, 2002

My name is Mud

Toiletbrush posted:

If I want to prevent external clients or spammers from using my SMTP server as a relay, while still being able to send from localhost to any host I want under any source address, I just need to set mynetworks_style=host, right?
That's right, as long as 'mynetworks' isn't set at all.
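For reference, the relevant main.cf fragment would be about this minimal (a sketch — check `postconf -d` for your build's defaults):

```
# /etc/postfix/main.cf (sketch)
# Trust only the local machine as a relay source:
mynetworks_style = host

# ...and make sure there's no explicit 'mynetworks =' line
# elsewhere in the file to override it.
```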


Alowishus
Jan 8, 2002

My name is Mud
Well you may have two problems... perhaps get things working by IP first to take BIND out of the equation.

But the most obvious thing to me is that you have BIND configured to listen only on 127.0.0.1. Your other computer isn't going to be able to talk to it that way.

Edit: and as for Shorewall, are you sure that IP forwarding is enabled in the kernel? Shorewall will do it for you if you have the right setting in shorewall.conf; otherwise you have to do it yourself and then set it permanently in /etc/sysctl.conf.

To check:
code:
cat /proc/sys/net/ipv4/ip_forward
You want to see '1'.
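If you see '0', flipping it on looks roughly like this (a sketch; both commands need root):

```shell
# enable immediately (doesn't survive a reboot):
sysctl -w net.ipv4.ip_forward=1

# make it stick across reboots:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
```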

Alowishus fucked around with this message at 00:09 on Mar 22, 2008

Alowishus
Jan 8, 2002

My name is Mud

Sergeant Hobo posted:

OK, that makes sense. I basically need to change it to listen on both itself and the external interface then?
It needs to listen on your *internal* interface, not external. I assume this is a two-NIC machine, right? It doesn't even have to listen on 127.0.0.1, but having it do so surely helps with troubleshooting. But also recall that Shorewall treats traffic from the firewall itself ($FW) completely differently than traffic coming from an internal network... so behavior on the firewall itself isn't always a good indicator of anything.

Anyway... change your listen-on and allow-recursion directives to include your internal IP, like:
code:
allow-recursion { 127.0.0.1; 192.168.1.1; };
listen-on { 127.0.0.1; 192.168.1.1; };
(Of course I'm making assumptions about your internal network numbering... adjust accordingly.)

Alowishus fucked around with this message at 00:38 on Mar 22, 2008

Alowishus
Jan 8, 2002

My name is Mud

Sergeant Hobo posted:

So I ended up trying dnsmasq and it worked. Don't know what was going on but as long as it works. Thanks for all the help.
Awesome... simplicity wins! :) Sorry it was such a process.

Alowishus
Jan 8, 2002

My name is Mud

Kidane posted:

Secondly, and this one is really stupid but -- I just installed postfix and I in the process of configuring it but I noticed it's not listening on ports 110 or 143. Do I need to install a separate POP or IMAP server?
Correct. Postfix is only meant to be an MTA (Mail Transfer Agent). It's not responsible for handling anything other than the in/out SMTP routing side of the equation, plus handing inbound mail to a local delivery agent[1]. You'll need to install a separate IMAP/POP server... I generally recommend Dovecot, as it can serve both protocols (plus the SSL variants) and can deal with both Maildir and mbox storage. Check your package manager; it should be there...


[1] In a default Postfix install, its local component is doing the delivery, and that's where your .forward magic is happening. If your goal is to deliver everything to procmail anyway, you can reconfigure Postfix to use procmail directly and skip the extra .forward step. It's up to you, but thought I'd mention it... check the "mailbox_command" directive in main.cf if you're interested.
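For the record, the procmail shortcut is a one-liner in main.cf (a sketch — the path is an assumption, so check `which procmail` on your box first):

```
# /etc/postfix/main.cf (sketch)
mailbox_command = /usr/bin/procmail
```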

Alowishus
Jan 8, 2002

My name is Mud

Snozzberry Smoothie posted:

I'd like some help with SSH. I want to give SCP access to a co-worker so that she can access her files on the Debian servers from home, but I'm a little concerned about security. How can I configure SSH on her account so that she cannot browse outside of her home directory? Logging into her account, I can go to the parent directory, and while most of the files have access denied, she can still view directories.
There isn't an easy answer to this... it's difficult to chroot SSH sessions, though there are some patches available for OpenSSH that can do it. Google for "chroot ssh" or check out this link.

What files are you concerned about? If it's other users' homes, you can stop that by ensuring all directories under /home are 700, though this could potentially break Apache or any other daemon that serves content out of users' homes (a workaround to this is to add Apache's user to each person's group). If it's general configuration stuff under /etc, you may not have a choice. If it's private configuration stuff under /etc such as passwords in config files or SSL keys, you should probably take a deeper look at your permissions setup.

Of course you can always use FTP since it's easier to chroot, and if security is critical that can be done over SSL.

Alowishus
Jan 8, 2002

My name is Mud
Well to be fair, Snozzberry isn't using it as a protective shield from hackers... he just wants to give a user scp access without the potential for casual poking around. Seems like the script he found will do the trick...

Surely it'd be a different story on a shared hosting server with shell access.

Edit: I think the built-in OpenSSH functionality comes in 4.8 when it's released.

Alowishus
Jan 8, 2002

My name is Mud

rugbert posted:

Hey Alowishus, have any suggestions for adding search functionality behind zope?
Behind Zope? One of the things that the Zope framework provides is a reasonably flexible and scalable search API... what are you wanting to accomplish?

Alowishus
Jan 8, 2002

My name is Mud

rugbert posted:

Our clients want a search feature on their web page and our web guy is out for a while. So now Im in charge of website maintenance too :\ I was thinking of just telling them to use the Google Enterprise search app.
Yeah if you can't or don't want to do it on the backend, a web crawler approach is your other choice. A Google Enterprise box would surely do the trick, or if the pages are publicly accessible then Google Custom Search Business Edition would work.

Or ht://dig or one of sixteen other open source crawlers.

Alowishus
Jan 8, 2002

My name is Mud

rugbert posted:

actually we cant use web crawlers so Im gunna have to do some back end stuff, could you point me into the right direction?
You're going to need to define a Catalog and some Indexes... some of those may exist in your site, you may just have to figure out how to query them.

Docs here in the Zope book.

Alowishus
Jan 8, 2002

My name is Mud

Feral Integral posted:

find /Music -name *.mp3 | scp user@backupserver:~/music

this doesn't really work at all :/ . What would you guys suggest?
First problem is that you should quote the wildcard in find.

Second problem is that your MP3 files probably have spaces in them, so you'll need to account for that with find.

Third problem is that you're only piping a big chunk of text to scp... it doesn't know that it's supposed to be a list of files to copy. You need xargs for this.

Here's what it should look like:
code:
find /Music -name '*.mp3' -print0 | xargs -0 -I '{}' scp '{}' user@backupserver:~/music/
So it's going to find all .mp3 files in /Music and print the list out NUL-separated instead of newline-separated (so that spaces in filenames don't throw it off). xargs is a utility that takes the file names from find and executes a command on them. In this case, your list of files is going to be the source, not the destination, so you need the {} substitution method; otherwise xargs will put your file at the end of the command, which would be backward.

Rsync is still probably the better approach but thought I'd help with the method you were trying too. Do realize that this will effectively "flatten" any directory structure you had in your /Music folder when it reaches the backup server.
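If you want to sanity-check that pipeline before aiming it at the backup server, the same pattern works locally with cp standing in for scp (the scratch directories below are invented for the demo):

```shell
# build a scratch source tree with a space in one filename
SRC=$(mktemp -d); DEST=$(mktemp -d)
touch "$SRC/one.mp3" "$SRC/two tracks.mp3" "$SRC/notes.txt"

# the NUL-separated list survives the space; '{}' makes each file the
# source argument instead of letting xargs tack it on at the end
find "$SRC" -name '*.mp3' -print0 | xargs -0 -I '{}' cp '{}' "$DEST/"

ls "$DEST"
```

Swap cp back out for `scp user@backupserver:~/music/` once it behaves.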

Alowishus
Jan 8, 2002

My name is Mud

rugbert posted:

the zope site is retarded. the server we inherited doesnt have zcatalog installed. But I cant find a link to download it anywhere, ive searched all over their CVS to no avail.
Zuh?? I won't argue that the Zope site is somewhat retarded, but ZCatalog is a fundamental piece of the base Zope distribution... any Zope 2.x tarball will have it, and the only way a server wouldn't have it installed is if someone purposely removed it... and at that point I'm not sure Zope would even start.

Were you expecting some sort of external utility? It should just be available as an object in the "Select type to add..." dropdown in the ZMI.

Alowishus
Jan 8, 2002

My name is Mud
And you'll appreciate DeltaCopy for your Windows rsync setup... much easier than dealing with Cygwin and you get a (clunky) GUI in the deal.

Alowishus
Jan 8, 2002

My name is Mud

Overture posted:

I have googled the bounce messages in /var/log/mail.log to no avail. Here is an example:
Something is very weird in how your outbound mail is being addressed... perhaps there's some rewriting going on that shouldn't be? The key to me is the fact that the remote server is rejecting mail to "<bounce......-cullen=mazenti.com@energyconversation.org>"... how did cullen@mazenti.com get rewritten to something @energyconservation.org and wrapped with that bounce ID? Is your CRM trying to do some sort of VERP?

Alowishus
Jan 8, 2002

My name is Mud

Korthing posted:

Bash also provides the 'disown' command; it works similarly to nohup, but you can bg a running process and then 'disown' it from your terminal.
That's a new one to me... sounds convenient if you've forgotten to run something with nohup, but where does the output go?

Alowishus
Jan 8, 2002

My name is Mud

Jimmy Carter posted:

however, no matter what I put in for --newer-than, it still wants to download every file.
What version of lftp? The original behavior of --newer-than was to provide it with an existing filename and it would then download anything newer than that file. It wasn't until version 3.x that they added the capability to take at-style time specifications like you're providing. Also I'm not fully clear on what the acceptable time specs are... is 'week ago' sufficient or does it have to be '1 week ago'? Might try 'now-7days' as an alternative.

Alowishus
Jan 8, 2002

My name is Mud

jason posted:

Is any info dumped to disk when a kernel panic occurs? One of my RHEL3 servers crashed this morning but I wasn't in the office so I couldn't read the message on the console.
It *can* be - depends on whether your system was configured for crash dumps. It does seem like something that should just be done automatically, but that hasn't been the case historically.

Anyway, this RedHat magazine article goes through the whole process... if your system was already configured then maybe you have something to work with, and if not then at least you can do the configuration so that next time it panics you will get something.

Alowishus
Jan 8, 2002

My name is Mud
Hmm, not quite sure what's going on with that easycam website, but it doesn't look like it's set up to distribute a source package. If you want to build it yourself it looks like you're pretty much on your own... you'll need to use a tool like 'wget' to suck down the contents of http://blognux.free.fr/sources/EasyCam2/04032006_19:49/ which appears to be the latest source update.

It may still not work if the author didn't test compilation on non-x86 architectures, but I suppose it's worth a try.
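Something along these lines should pull the whole directory down (a sketch — the flags may need adjusting depending on how that server generates its listings):

```shell
# -r: recursive, -np: don't climb to the parent directory,
# -nH / --cut-dirs: drop the hostname and leading path components
# from the local layout so you just get the source files
wget -r -np -nH --cut-dirs=3 'http://blognux.free.fr/sources/EasyCam2/04032006_19:49/'
```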

Alowishus
Jan 8, 2002

My name is Mud

Harokey posted:

What's the best file system for this set up? I had been using ext3, but It has gotten corrupted so many times now if the power goes out or something like that. Would XFS be better?
Is power failure on the drive or system common? No filesystem is going to react well to hard power-off. XFS and ext3 (by default) only journal metadata - changes to filesystem structure - not user data. If the power goes out in the middle of a file write, you will wind up with a corrupt file, even though its location on disk won't be in question. :)

How do you mean corrupted? Just that it had to go through a lengthy re-check upon boot? Or did you have actual data loss? If you're only talking about occasional power failures and your complaint with ext3 was that it took too long to do a full re-check for consistency, then yes XFS is an excellent alternative. It's no less likely to get corrupt, but it will fix itself up more quickly.

If you are expecting regular power failures, then actually your best choice *is* ext3, but with the optional full data journaling turned on. This will slow your write performance somewhat, but it will cause every bit of data that gets written to the disk to be journaled, and thus make it recoverable (or reversible) without risk of corrupt files or long re-check times. As far as I know, ext3 is the only filesystem for Linux that can do full data journaling.

quote:

Also how should I mount it? I had just entered in the fstab, is there some better way ?
That's generally the right way to do it. If you don't necessarily want it mounted all the time, you might want to consider putting it under the care of autofs, which will automount it when you need and then dismount it when it's not in use. This may also help alleviate some of the problems discussed above.
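A hypothetical autofs setup for that (the file names are the standard ones; the mount point, map name, and device are assumptions — adjust for your disk):

```
# /etc/auto.master -- hand /misc to the auto.misc map,
# unmounting anything idle for 60 seconds:
/misc   /etc/auto.misc  --timeout=60

# /etc/auto.misc -- 'backup' mounts /dev/sdb1 at /misc/backup on demand:
backup  -fstype=ext3    :/dev/sdb1
```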

Alowishus
Jan 8, 2002

My name is Mud

fletcher posted:

I'm a little confused on how to setup directories/permissions for apache. I'd like to login to my server as user fletch, and have all my virtual hosts in directories like ~/www/domain.com/. Apache runs as user apache group apache though, so I get a 403 no permission when I try to go to domain.com. This goes away if I chown -R apache.apache the directory, but I don't want to do that.
As long as every directory starting with /home/fletch has the o+x bit set, Apache should have no trouble serving your files. By default your home directory is generally chmod 700, which prevents Apache from serving anything inside. If this is a more recent system with SELinux enabled, that could also be getting in the way, but there's a boolean toggle that enables a policy which lets Apache serve content from users' homes.
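Concretely, the traverse ("x") bit on each directory in the path is all Apache needs. Here's the idea demonstrated on a scratch tree (substitute /home/fletch and friends on the real system):

```shell
DEMO=$(mktemp -d)
mkdir -p "$DEMO/fletch/www/domain.com"
chmod 700 "$DEMO/fletch"           # the usual locked-down home

# o+x lets other users (like apache) pass *through* a directory
# without being able to list its contents:
chmod o+x "$DEMO/fletch" "$DEMO/fletch/www" "$DEMO/fletch/www/domain.com"

stat -c '%a %n' "$DEMO/fletch"     # home is now mode 701
```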

Alowishus
Jan 8, 2002

My name is Mud

calandryll posted:

I am currently trying to configure CUPS on a fresh install of Hardy server. I'm trying to have my server act as a print server also. I keep getting a lot of time outs when trying to configure my printers using the web interface. Is there a command line stuff I can use for it?
'lpadmin' at the CLI can do pretty much anything the web interface can do. But I'd be concerned about the web timeouts... CUPS' web server is built into the daemon, so if there are web problems I wouldn't be shocked to subsequently see printing problems too. I'd bump up logging and take a look at the error logs to see what's going on...

Alowishus
Jan 8, 2002

My name is Mud

Twlight posted:

While this is some time out, I'd figure that I should learn more about linux mail systems. I might have to build one for work at some time and having one built and running on my home pc might be something id use as well. Where should I go about learning the in/outs of a particular program? I've been reading about postfix and it seems pretty good. I'd like to get calendar integration too, but at a much lower priority, as well. Where should I begin to search?
Realize that mail on Linux is modular. Postfix is generally part of the equation, but it's only an SMTP server. It's responsible for routing outbound mail, receiving incoming mail and passing it off to be delivered to users on the machine, or passing it on to another SMTP server. That's about it. If users want to pick up their mail via POP or IMAP, then you need to introduce something like Dovecot into the equation. If they want webmail, then you add something like Squirrelmail. If you want spam filtering for your incoming mail, then something like SpamAssassin can be added. Calendaring is another entire topic.

So, if you're interested in the inner workings of mail, then it's probably best to just take a clean install of something like CentOS or Ubuntu Server and start fiddling with some of the above components. However, if you're more interested in the end-product of having a functioning mail server, then throw all of the above out and just learn how to install and administer Zimbra. It's basically a turnkey mail system that puts all of the above components together for you, takes care of the integration, and slaps a very nice web interface on top.

Alowishus
Jan 8, 2002

My name is Mud

H0TSauce posted:

I tracked it back to a PHP configuration option that has --without-pear set.

Is there any way i can turn off this option, or am i faced with recompiling PHP?
Doesn't matter, that just prevented the base PEAR manager and libraries from being built when PHP was built. You can add it later by following these instructions.

Alowishus
Jan 8, 2002

My name is Mud

J. Elliot Razorledgeball posted:

I want to mount a samba share on startup by using fstab, but it doesn't work because the network gets brought up after fstab is run. This is on Fedora Core 7.
chryst is right on the money, and the script you need to run at startup is /etc/init.d/netfs

To make sure it runs at startup, 'chkconfig netfs on'. If it still doesn't mount then yeah there may be a formatting problem in your fstab...

Alowishus
Jan 8, 2002

My name is Mud

rugbert posted:

Is there a way to adopt one package management system for another? If Im going to get a laptop I should probably install Ubuntu. I hate apt-get tho, is there anyway of uninstalling it and using yum??
Try aptitude on Ubuntu, it's got more brains than apt-get... but really, the package managers all do roughly the same thing. Is it the fundamental packaging approach of Debian/Ubuntu that you don't like? It seems strange to "hate" a tool like apt-get...

Alowishus
Jan 8, 2002

My name is Mud

blitrig posted:

How would I go about starting KDE via SSH, log into it via VNC, do my stuff, and then shut it down again via SSH?
VNC can run headless on Linux... through SSH, install the tightvncserver package and then run the 'vncserver' command. You'll be prompted to set a password, and then a VNC-based X session will start up in the background. The first session will be :1 which maps to port 5901. The second will be :2 which maps to 5902 and so forth. You will probably have to edit ~/.vnc/xstartup to tell it to start KDE instead of twm. When you're done, just shut down KDE and the VNC server instance will terminate. If it doesn't, "vncserver -kill :1" will do the trick.
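The whole cycle, roughly (package names per Debian/Ubuntu; the display and port numbering is as described above):

```shell
# on the server, over SSH:
vncserver :1                 # first run prompts you to set a VNC password

# edit ~/.vnc/xstartup so the session launches KDE, e.g. replace
# the 'twm &' line with:
#   startkde &

# connect your VNC client to server:5901, do your stuff, log out of
# KDE... and if the session lingers, kill it by hand:
vncserver -kill :1
```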

Alowishus
Jan 8, 2002

My name is Mud

Kenfoldsfive posted:

So clearly this has been deposited somewhere other than .config, and rather than trudge through my entire /usr/src/ directory Indiana Jones-style I thought I'd ask you guys for help. My sanity will thank you.
Yeah the local version string gets put in some Makefiles and other spots in addition to .config. Running 'make mrproper' should clean it up... that command will revert your source tree to basically the original state, so back up your .config.

Alowishus
Jan 8, 2002

My name is Mud

Grigori Rasputin posted:

Any idea how I can blow these files away?

They came in with bad permissions while using rsync's -p flag to persist permissions.
Well if you don't have root, then at least we know your user owns the files... it's probably just that they're marked something like 000 due to rsync's attempts at preserving permissions. Do you have command line access? If so, you can run through recursively with chmod and fix them all back to something more sane like 755 and then remove them.
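A sketch of the cleanup, demonstrated on a scratch tree so nothing real gets touched (on your server you'd aim the chmod/find lines at the actual directory):

```shell
# simulate the damage: a small tree with mode 000 everywhere
TREE=$(mktemp -d)/badcopy
mkdir -p "$TREE/sub"
touch "$TREE/sub/file"
chmod 000 "$TREE/sub/file" "$TREE/sub" "$TREE"

# restore sane modes: top directory first so find can descend, then
# directories one at a time (so descent keeps working), then files
chmod 755 "$TREE"
find "$TREE" -type d -exec chmod 755 {} \;
find "$TREE" -type f -exec chmod 644 {} \;

# now they delete cleanly
rm -r "$TREE"
```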

Alowishus
Jan 8, 2002

My name is Mud
No, software RAID in Linux on any modern CPU should have a minimal if any performance hit. TheGreenBandit, what does your disk controller layout look like? Are these SATA or IDE drives? If SATA, are they configured as AHCI? Give us more hardware details...

Alowishus
Jan 8, 2002

My name is Mud
You can always 'cat /proc/mdstat' to see what the RAID is up to

Alowishus
Jan 8, 2002

My name is Mud
Do you need local GUI, or just GUI via VNC? Remember that a headless server can run remote VNC sessions... so if possible, save your memory by not running X on the server's display.

Also be sure your app works on a more recent distribution... if it's old enough to list kernel 2.2 and glibc 2.1 as its requirements, a 2.6 kernel and glibc 2.5 might piss it off.

Assuming it's happy with modern distros, I'd probably try Debian. A basic install can be done from one CD, and then you can add KDE and stuff through apt. Fedora 9 is going to be tight.

Alowishus
Jan 8, 2002

My name is Mud

aunaturale posted:

I have me a laptop at PII 187 mhz 192 MB of ram. Currently running Win XP Pro.

Any recommendations as to a linux system that will both run faster than windows on such a machine and that is easy to use? Thanks :)
Give Damn Small Linux and Puppy Linux a look. I used Puppy on an old 233MHz laptop with about that much RAM and it worked pretty well.

Alowishus
Jan 8, 2002

My name is Mud
I'd add DenyHosts to the equation. It's perfect if you are going to be accessing SSH from enough potentially different IPs that using tcpwrappers is impractical... you can set it so that ~3 unsuccessful login attempts from any IP will get that IP blocked automatically. That plus good passwords and you should be in excellent shape.

Alowishus
Jan 8, 2002

My name is Mud

trilljester posted:

Also, what's the consensus here about KDE4? I've heard good and bad things about it. Mainly that 4.1.0 is not fully ready for use.
No, seems more like 4.0 was not fully ready but 4.1 polished it up significantly. Here's the Ars review, decide for yourself.

Alowishus
Jan 8, 2002

My name is Mud
If you're looking to do Linux related stuff in large corporate environments, CentOS is your best choice as it's just RedHat Enterprise Linux minus the support contract. Generally the networking tools are the same across distros, but if you have to get into configuring things like VLANs then the techniques become distro-specific, and knowing something RedHat-based will be most helpful.

Alowishus
Jan 8, 2002

My name is Mud

Steppo posted:

Is using symbolic links habitually a good practice? If not, would using it in this case be an exception? I doubt that there's enough demand on these documents to create some creepily absurd CPU overhead, with links going this way and that, and it does seem to be the most secure method, shy of FIXING THE MOTHERFUCKING CODE.
I don't see a technical problem with doing the symlinks in this situation. They shouldn't really cause much in the way of CPU overhead, maybe just a bit more disk activity. The biggest disadvantage is the massive administrative overhead that it will cause you. I guess you could set up a cron job that scans for any new documents and auto-symlinks them to the root... only potential problem there would be name collisions.
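A sketch of that cron job idea, demonstrated on scratch directories (the names are invented); the `-e` test is what saves you from the name-collision case:

```shell
DOCS=$(mktemp -d); WEBROOT=$(mktemp -d)
mkdir "$DOCS/archive"
touch "$DOCS/report.pdf" "$DOCS/archive/old.pdf"

# link every document into the web root unless that name is taken
find "$DOCS" -type f -name '*.pdf' | while read -r f; do
    name=$(basename "$f")
    [ -e "$WEBROOT/$name" ] || ln -s "$f" "$WEBROOT/$name"
done

ls -l "$WEBROOT"
```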

Alowishus
Jan 8, 2002

My name is Mud
:siren: Super Sekret Way To Figure Out Distribution And Version On Most Modern Linux Installs :siren:
code:
lsb_release -a
This works on every RedHat/CentOS and Fedora system I've tested going back to at least RHEL3, every Ubuntu and OpenSuSE I've tried, and at least Debian 4. I'd be curious to hear about Slackware.

The best thing is that you don't even have to have a vague guess about your distribution, since it abstracts all the /etc/*release|version* crap. Witness:
code:
-----
System 1
-----
$ lsb_release -a
Distributor ID:	Ubuntu
Description:	Ubuntu 8.04.1
Release:	8.04
Codename:	hardy

-----
System 2
-----
$ lsb_release -a
Distributor ID:	CentOS
Description:	CentOS release 5.2 (Final)
Release:	5.2
Codename:	Final

Alowishus
Jan 8, 2002

My name is Mud

Kane posted:

ADOPT ME

Any volunteers? You'll be helping science! :science:
I am a quick learner and promise I won't bug you too much or over trivial things.
I'm happy to help, and one place you'll find me is on the #shsc IRC channel that Saukkis wisely recommended. I'm idle there along with lots of other smart people at least during normal M-F workdays. If you'd rather go via IM, you're also welcome to hit me up via AIM (Alow00tshus). Of course as I post this I'm leaving the house, but I'll be on later this evening...

Alowishus
Jan 8, 2002

My name is Mud

StrikerJ posted:

Most new graphics cards can handle hardware decoding of things like mpeg2, mpeg4 and h.264 with a very low CPU useage, but from what I understand this isn't possible in Linux because of some driver issue? Is that correct and in that case, is it something that will be fixed?
Yes, nVidia's latest driver release for Linux has hardware decoding capabilities. I believe Intel is also making progress on drivers that enable ClearVideo on their newer chipsets. I don't know what ATI's status is.

quote:

Is the situation the same for all the hardware makers (Nvidia, ATI, Intel)? I guess I haven't really grasped the problems Linux usually seem to have with 3d acceleration. Is it because the vendors doesn't provide any drivers or just not open source drivers?
Most vendors are pretty good about their Linux support these days. Of the three you mentioned, Intel is tops with fully open source drivers. There are also open DRI drivers for older ATI chipsets, but for anything cutting edge from nVidia or ATI you're going to have to use their binary drivers. They work, the problem has just long been installation. ATI used to only provide RPMs that were built for a specific version of X11. nVidia was a little better about multi-distro support, but they insisted that you use their installer. These days, as long as you don't mind enabling non-free repositories, most modern distributions have repackaged versions of the proprietary drivers available for relatively easy installation. The newer versions of X11 are also getting much better at hardware detection and automatic configuration without requiring funny config file tweaking, so the situation is definitely improving.


Alowishus
Jan 8, 2002

My name is Mud

Jo posted:

I'd like to toss a liveCD onto my new machine (Euclid) and pass the extra cycles to Gauss for when I'm running gcc, digest, or rendering stuff.
The distcc live CD should do the trick for compiling stuff.
