JHVH-1
Jun 28, 2002

Bob Morales posted:

When they say 'unstable' they don't mean that in terms of crashing or anything like that.

There's the occasional chance of package conflicts and the like, since it has newer versions of software. I wouldn't rule it out, but it's not like they're pushing development builds into the packages or anything. Debian is just anal about what goes into stable. I ran it as a desktop for a long time.

Xenomorph
Jun 13, 2001

waffle iron posted:

Add _netdev as an option in your fstab file. That makes it not be mounted until your network connection is up.

Edit: Alternatively you could use autofs (or similar) so that the NFS share is only mounted when the mountpoint is accessed.

_netdev didn't seem to make any difference.

What the user ended up doing is something like this:

code:
server:/whatever/share /bacon nfs defaults 0 0
server:/whatever/share /bacon nfs defaults 0 0
server:/whatever/share /bacon nfs defaults 0 0
The first two give the error that the network isn't up, and by the time the third one is tried, the network is up and the NFS mount succeeds.

I saw on some other forum that a user was putting a "sleep" command in the system's NFS mount script. I checked /etc/rc.d/rc5.d/S60nfs, but didn't see anything that matched up with the example I found. Just as something to try, we renamed S60nfs to S94nfs to see if it would load later in the startup sequence. It still tried to load NFS before the network was up.

We may look into the automount thing next.
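
If we do go the autofs route, a minimal sketch would be something like the following (assuming the same server:/whatever/share export and /bacon mountpoint as the fstab lines above; the map file name is arbitrary):

code:
# /etc/auto.master -- use a direct map, unmount after 60s idle
/-      /etc/auto.nfs   --timeout=60

# /etc/auto.nfs -- mount server:/whatever/share on first access to /bacon
/bacon  -fstype=nfs,rw  server:/whatever/share
Since autofs only mounts the share when something actually touches /bacon, the boot-time ordering problem goes away entirely.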

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bob Morales posted:

When they say 'unstable' they don't mean that in terms of crashing or anything like that.
To clarify, they're talking about ABI and configuration stability. When you look at an enterprise distribution like Red Hat Enterprise Linux (or hell, even Debian Stable), they supply a release with an implicit contract that except where explicitly documented, the behavior of your system should never change between upgrades. This often means that there's one supported version of a program for the effective lifetime of the distribution, but what it really means is that when you install a new version of a package, the config file format shouldn't change. It shouldn't start arbitrarily gutting libraries and replacing them with new, incompatible versions. Nothing on your system should ever cease to function on account of a package update.

This isn't really the case with rolling-release distributions like Debian Unstable or Arch -- as new things come out, they're added to the repository. You may go to perform a package update across your OS and discover afterwards that hey, XFree86 is gone, and now X.org is installed and it's putting config files in a different place and your GUI won't start. There's breakage when you update, but if you aren't in the middle of upgrading your packages, what you have on your system should be pretty solid.

In general, commonly-used things are vetted pretty well (in experimental) before they're put into unstable. You mostly have to be concerned with config breakages, which are rarely a problem if you know what you're doing.
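
If you want a few newer packages without committing the whole system to unstable, apt pinning is the usual middle ground. A sketch, assuming a stable base with an unstable line added to sources.list (the mirror URL is arbitrary):

code:
# /etc/apt/sources.list
deb http://ftp.debian.org/debian stable   main
deb http://ftp.debian.org/debian unstable main

# /etc/apt/preferences -- default to stable, pull unstable only on request
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=unstable
Pin-Priority: 100
Then apt-get install -t unstable foo pulls just that package (and whatever dependencies it drags along) from unstable, while everything else keeps tracking stable.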

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I feel stupid asking this, but I use linux rarely enough that I always forget the most basic things...

Say I've got a directory:

code:
drwxrw-rw- 558 therms therms      32768 2011-06-03 01:26 awesome_dir
I symlink that in /var/www to share its contents (dirs and files) via apache.

Apache runs as www-data. What's the best way to make it so that Apache can serve the files/directories in awesome_dir while making sure I don't gently caress up permissions for other things Apache is serving and for other things accessing awesome_dir?

Fcdts26
Mar 18, 2009
For some reason, over the last few weeks, a directory I have shared stops sharing and a white lock appears. If I go in and redo the permissions, it works just fine again. This has been happening randomly; sometimes it works great for a few days, but today I've had to reset it 3 times. I'm pretty much a noob. Running 10.04. Thanks!

covener
Jan 10, 2004

You know, for kids!

Thermopyle posted:

I feel stupid asking this, but I use linux rarely enough that I always forget the most basic things...

Say I've got a directory:

code:
drwxrw-rw- 558 therms therms      32768 2011-06-03 01:26 awesome_dir
I symlink that in /var/www to share its contents (dirs and files) via apache.

Apache runs as www-data. What's the best way to make it so that Apache can serve the files/directories in awesome_dir while making sure I don't gently caress up permissions for other things Apache is serving and for other things accessing awesome_dir?

Remove world-writability from that dir and any files (chmod -R o-w awesome_dir/). Add world-execute (search) to that dir and any below it (find awesome_dir/ -type d | xargs chmod o+rx)
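
Spelled out, roughly (same directory as in your post; the -exec form just avoids choking on filenames with spaces):

code:
# drop world-write everywhere under the tree
chmod -R o-w awesome_dir/

# directories need o+x (search) so Apache can descend into them, o+r to list them
find awesome_dir/ -type d -exec chmod o+rx {} +

# plain files only need o+r to be served
find awesome_dir/ -type f -exec chmod o+r {} +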

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I've got 15 NTFS-formatted USB external hard drives, and I want all their contents to be identical. I found this that shows how to do it with dd. That's fine and good, but these drives are 500 GB and I'm only going to have 60-70 GB on each one. I'm wondering if anyone knows a way I could do this with something like partimage or ntfsclone or ntfsresize, so that I'm only writing the data to the disk and not 400 GB of zeros, but still have the full capacity of the drive available in the filesystem.

Inquisitus
Aug 4, 2006

I have a large barge with a radio antenna on it.

evol262 posted:

Just to say, you're ordering a VPS. Learn to manage it yourself. You could install Gentoo in a chroot and swap over to it from there. Makes no difference to them, really. Templates are for their convenience, but you can do whatever.

You're quite right :)

ClosedBSD posted:

Debian unstable is actually pretty stable - probably more so than Ubuntu

Also you don't have to ask them to set you up with unstable, if they can set you up with stable you can either make the switch manually or grab smxi and do it the easy way.

Well, I went for Debian stable, and tried upgrading by modifying sources.list and running apt-get dist-upgrade et al, but it shat itself and told me that some of the packages it was trying to upgrade needed a newer kernel version (it's running 2.6.18). I fired off a support ticket and they say it's a restriction in OpenVZ and that I need to look at Xen instead if I want a newer Kernel.

Does this screw me over with regards to upgrading to unstable, or is there a way round it?

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

FISHMANPET posted:

I've got 15 NTFS-formatted USB external hard drives, and I want all their contents to be identical. I found this that shows how to do it with dd. That's fine and good, but these drives are 500 GB and I'm only going to have 60-70 GB on each one. I'm wondering if anyone knows a way I could do this with something like partimage or ntfsclone or ntfsresize, so that I'm only writing the data to the disk and not 400 GB of zeros, but still have the full capacity of the drive available in the filesystem.

Pretty sure Clonezilla can do that, but I don't remember which tool it uses under the hood to do so. (ntfsclone does sound about right)
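
If you end up scripting it by hand, the ntfsclone route looks roughly like this (a sketch -- /dev/sdb1 is the master partition and /dev/sdc1 a target, purely placeholders). ntfsclone's image format only stores allocated clusters, so you never write the ~400 GB of empty space:

code:
# optional: shrink the master filesystem first so the image fits any target
ntfsresize --size 80G /dev/sdb1

# save only the used blocks
ntfsclone --save-image --output master.img /dev/sdb1

# per target: make an NTFS-type partition spanning the drive, restore, then grow
ntfsclone --restore-image --overwrite /dev/sdc1 master.img
ntfsresize /dev/sdc1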

text editor
Jan 8, 2007

Inquisitus posted:

You're quite right :)


Well, I went for Debian stable, and tried upgrading by modifying sources.list and running apt-get dist-upgrade et al, but it shat itself and told me that some of the packages it was trying to upgrade needed a newer kernel version (it's running 2.6.18). I fired off a support ticket and they say it's a restriction in OpenVZ and that I need to look at Xen instead if I want a newer Kernel.

Does this screw me over with regards to upgrading to unstable, or is there a way round it?

I know it's possible to manually upgrade to unstable without moving to a new kernel, but once again I'm going to have to recommend using smxi to do it - it just makes things so easy.

I used it last night to push my VPS to unstable for the Mumble server I was setting up, and it gave me the option of leaving my old kernel. Sure enough, as I check it now, I am still using my provider's Xen-optimized kernel; uname -a says it's 2.6.39-linode33.


Edit: nvm maybe I left it on Debian stable, lemme see if I can figure it out

text editor fucked around with this message at 16:10 on Jun 10, 2011

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine
Will an every-minute cron job always execute every minute? By that, I mean, will load level ever conceivably delay it from executing in a particular minute? (of course, if load level's that bad, we have worse problems)

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Golbez posted:

Will an every-minute cron job always execute every minute? By that, I mean, will load level ever conceivably delay it from executing in a particular minute? (of course, if load level's that bad, we have worse problems)

I was interested in the answer to this so I dug around in the cron source a bit:

vixie cron source posted:

/* the task here is to figure out how long it's going to be until :00 of the
* following minute and initialize TargetTime to this value. TargetTime
* will subsequently slide 60 seconds at a time, with correction applied
* implicitly in cron_sleep(). it would be nice to let cron execute in
* the "current minute" before going to sleep, but by restarting cron you
* could then get it to execute a given minute's jobs more than once.
* instead we have the chance of missing a minute's jobs completely, but
* that's something sysadmin's know to expect what with crashing computers.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
If you're trying to get something to run every minute it would probably be better to get a daemon of some kind to run it directly instead of with cron.
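
Something as dumb as this works (a sketch -- /usr/local/bin/thejob is a placeholder), started once from an init script or an @reboot crontab entry:

code:
#!/bin/sh
# poor man's daemon: run the job once a minute, forever
while true; do
    /usr/local/bin/thejob
    sleep 60
done
Note that it drifts by however long the job itself takes each pass; sleep until the top of the next minute instead if exact alignment matters.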

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Can anyone help setting up unixODBC to work with MS Access databases (.mdb)? I have a python script, using pyodbc, that works under Windows. I would really like it to run under linux, though. Is there anything besides the easysoft driver?

ed: should be unixODBC, not openODBC

taqueso fucked around with this message at 18:50 on Jun 10, 2011

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:

taqueso posted:

Can anyone help setting up openODBC to work with MS Access databases (.mdb)? I have a python script, using pyodbc, that works under Windows. I would really like it to run under linux, though. Is there anything besides the easysoft driver?

Me too, currently I'm using a locally-modified version of "mdbtools" which is a bit flaky and quite ancient. I'm just exporting the .mdb into MySQL format and working with the result, but it's a read-only process.
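
For anyone else stuck doing this, the stock mdbtools commands I'm wrapping are just these (thedb.mdb and SomeTable are placeholders; the column-name munging and backtick quoting still has to happen on top of this):

code:
# list the tables, one per line
mdb-tables -1 thedb.mdb

# dump the schema (massage the types and quoting for MySQL as needed)
mdb-schema thedb.mdb

# dump one table as CSV, ready for LOAD DATA INFILE or a conversion script
mdb-export thedb.mdb SomeTable > SomeTable.csv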

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

FeloniousDrunk posted:

Me too, currently I'm using a locally-modified version of "mdbtools" which is a bit flaky and quite ancient. I'm just exporting the .mdb into MySQL format and working with the result, but it's a read-only process.

I have some stuff I hacked together using mdbtools awhile ago. It had trouble with floats (IIRC, could be some other data type) being corrupted or otherwise read wrong. Any chance this is what you fixed in your locally modified version?

I gave up on mdbtools because the whole thing seemed like such a mess, but if it is the only way...

angrytech
Jun 26, 2009
Does anyone know anything about bind? I'm setting it up on my server and it seems to be having problems.
/var/log/syslog shows:
code:
Jun 10 13:10:34 mydomain named[29818]: zone mydomain.net/IN: NS 'mydomain.net' has no address records (A or AAAA)
Jun 10 13:10:34 mydomain named[29818]: zone mydomain.net/IN: not loaded due to errors.
Jun 10 13:10:34 mydomain named[29818]: running
/etc/bind/zones/mydomain.net.db has:
code:
$TTL 86400
mydomain.net.      IN      SOA     8.8.8.8 admin.mydomain.net. (
                                                        2006081401
                                                        28800
                                                        3600
                                                        604800
                                                        38400
 )
mydomain.net.    IN      NS              mydomain.net.
mydomain.net.    IN      MX     0        mydomain.net.
mydomain	  IN	  A	   	  int.ern.al.ip 

Sir Sidney Poitier
Aug 14, 2006

My favourite actor


I'm trying to set up HA clustering for Apache on two CentOS nodes using heartbeat. I'm following this tutorial:

http://www.howtoforge.com/high_availability_heartbeat_centos

I've got the files configured as they describe there (though with different IPs and node names, properly substituted), and the issue I'm having is that when I try to start Apache it says:

(99)Cannot assign requested address: make_sock: could not bind to address <virtual IP>:80
no listening sockets available, shutting down

Where I think the problem is happening is this point of the tutorial: "14. We don't need to create a virtual network interface and assign an IP address (172.16.4.82) to it. Heartbeat will do this for you, and start the service (httpd) itself. So don't worry about this."

Does anyone know why httpd is giving this error or how I can fix it? At present, the virtual IP is assigned to no interface on either node.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

taqueso posted:

I was interested in the answer to this so I dug around in the cron source a bit:
I think that just means that if you stop cron at 11:59:59 and start it at 12:00:00, it won't run jobs for 12:00. It should still try to run jobs if for some reason the process scheduler doesn't run it during the minute it's supposed to.

evol262
Nov 30, 2010
#!/usr/bin/perl

angrytech posted:

Does anyone know anything about bind? I'm setting it up on my server and it seems to be having problems.
/var/log/syslog shows:
code:
Jun 10 13:10:34 mydomain named[29818]: zone mydomain.net/IN: NS 'mydomain.net' has no address records (A or AAAA)
Jun 10 13:10:34 mydomain named[29818]: zone mydomain.net/IN: not loaded due to errors.
Jun 10 13:10:34 mydomain named[29818]: running
/etc/bind/zones/mydomain.net.db has:
code:
$TTL 86400
mydomain.net.      IN      SOA     8.8.8.8 admin.mydomain.net. (
                                                        2006081401
                                                        28800
                                                        3600
                                                        604800
                                                        38400
 )
mydomain.net.    IN      NS              mydomain.net.
mydomain.net.    IN      MX     0        mydomain.net.
mydomain	  IN	  A	   	  int.ern.al.ip 

That's not how you configure BIND. I don't see reverse DNS either, but eh.
code:
$TTL 86400
mydomain.net.     IN   SOA mydomain.net. root.mydomain.net. (
                       ; serial and poo poo here
                       )
                  IN   NS ns1.mydomain.net.
                  IN   MX 10 mail.mydomain.net.

ns1               IN   A $ip_address
mail              IN   CNAME ns1

pram
Jun 10, 2001

Anjow posted:


(99)Cannot assign requested address: make_sock: could not bind to address <virtual IP>:80
no listening sockets available, shutting down

Where I think the problem is happening is this point of the tutorial: "14. We don't need to create a virtual network interface and assign an IP address (172.16.4.82) to it. Heartbeat will do this for you, and start the service (httpd) itself. So don't worry about this."

Does anyone know why httpd is giving this error or how I can fix it? At present, the virtual IP is assigned to no interface on either node.

This is probably really obvious, but there's nothing listening on port 80 when you do netstat -nplat, right?

Sir Sidney Poitier
Aug 14, 2006

My favourite actor


Pram posted:

This is probably really obvious, but there's nothing listening on port 80 when you do netstat -nplat, right?

That's correct. Forgive me if there are any stupid mistakes; I'm only doing this as a learning exercise - it's not for a production environment.

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:

taqueso posted:

I have some stuff I hacked together using mdbtools awhile ago. It had trouble with floats (IIRC, could be some other data type) being corrupted or otherwise read wrong. Any chance this is what you fixed in your locally modified version?

I gave up on mdbtools because the whole thing seemed like such a mess, but if it is the only way...

Nope, no floats in what I have to work with, but whoever made the Access db thought it would be fun to have columns titled "Student?" and other stuff that MySQL doesn't like, so I just munged the column names and added backtick quoting for the export, because elsewhere they were using some MySQL keyword as a column name.

Also something about boolean fields being 'Y' or 'N' I think, but it was a while ago.

evol262
Nov 30, 2010
#!/usr/bin/perl

Anjow posted:

Does anyone know why httpd is giving this error or how I can fix it? At present, the virtual IP is assigned to no interface on either node.
Can you dump the heartbeat init script on pastebin or something? I suspect the author is mistaken.

evol262
Nov 30, 2010
#!/usr/bin/perl

FeloniousDrunk posted:

Nope, no floats in what I have to work with, but whoever made the Access db thought it would be fun to have columns titled "Student?" and other stuff that MySQL doesn't like, so I just munged the column names and added backtick quoting for the export, because elsewhere they were using some MySQL keyword as a column name.

Also something about boolean fields being 'Y' or 'N' I think, but it was a while ago.

IIRC, FALSE is 0, and TRUE is -1 due to idiotic VB bitwise booleans (and Jet being written in VB).

angrytech
Jun 26, 2009

evol262 posted:

That's not how you configure BIND. I don't see reverse DNS either, but eh.
code:
$TTL 86400
mydomain.net.     IN   SOA mydomain.net. root.mydomain.net. (
                       ; serial and poo poo here
                       )
                  IN   NS ns1.mydomain.net.
                  IN   MX 10 mail.mydomain.net.

ns1               IN   A $ip_address
mail              IN   CNAME ns1

In this line:
code:
ns1               IN   A $ip_address

which IP should $ip_address be pointing to?

Sir Sidney Poitier
Aug 14, 2006

My favourite actor


evol262 posted:

Can you dump the heartbeat init script on pastebin or something? I suspect the author is mistaken.

http://pastebin.com/B02URkr2

It does seem strange to me that one wouldn't have to have that IP assigned to anything. Some other tutorials make reference to using ipvs to deal with the IPs.
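
Two general workarounds I've seen mentioned elsewhere for that make_sock error (neither comes from the tutorial): let heartbeat be the thing that starts httpd, so the VIP already exists by the time Apache binds, or tell the kernel to allow binding to addresses that aren't local yet:

code:
# allow bind() to an address that isn't (yet) assigned to any interface
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p
Alternatively, just use a plain "Listen 80" in httpd.conf so Apache listens on all addresses and doesn't care which node currently holds the VIP.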

Stathol
Jun 28, 2008

angrytech posted:

Does anyone know anything about bind? I'm setting it up on my server and it seems to be having problems.

evol262 posted:

That's not how you configure BIND. I don't see reverse DNS either, but eh.

Here's a few other (sometimes) related BIND administration tips that I've learned the hard way:

  • Always remember to increment the zone serial when you change a zone. This is super important if you have any secondaries.

  • Whenever you manually edit a zone file, run "named-checkzone" before you try to reload the zone. This will make sure that your syntax is correct and that you have all the required entries before it's too late.

  • Similarly, if you edit any of the bind config files, run "named-checkconf". This is even more important because if you try to reload the bind service with an error in your config file, I'm pretty sure it self-terminates.

  • Rather than editing zone files, learn to use the "nsupdate" command. If BIND doesn't like your updates, it will just refuse to accept them, rather than messing up your zone file. You also don't have to worry about remembering to increment the serial if you use nsupdate. If you are making multiple updates, batch them up and send them all in one operation, i.e. don't make your primary generate 12 notifies because you changed 12 A records. Even better, do all of this in a little script file (there's a sketch of one after this list) so that you have an easy way to correct errors, and a paper trail of exactly what you changed in each operation.

  • Learn to use "rndc".

  • If you do dynamic DNS updates (for instance with "nsupdate"), freezing the zone with "rndc freeze" will make it commit the zone .jnl back to the zone .db so that you can manually edit it, or just view the zone as it currently stands.

  • code:
    logging {
        category lame-servers { null; };
        category edns-disabled { null; };
    };
    
    This will make your BIND server be far less spammy in the syslog, as these events are very common and virtually never relevant.

  • Don't try to use BIND as primaries for Windows domains. It *can* be done, but my God it's painful and twitchy. At a bare minimum, delegate the _msdcs, _tcp, etc. sub-zones to MSDNS. It's just not worth it. I speak from experience.
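
Here's the kind of little script file I mean for the nsupdate bullet above (a sketch -- the key path, names, and addresses are all placeholders; drop -k if your zone accepts unsigned updates from this host):

code:
#!/bin/sh
# push several record changes as one update transaction (one notify, one serial bump)
nsupdate -k /etc/bind/keys/update.key <<'EOF'
server ns1.mydomain.net
zone mydomain.net
update delete www.mydomain.net. A
update add www.mydomain.net. 3600 A 192.0.2.10
update add ftp.mydomain.net. 3600 A 192.0.2.11
send
EOF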

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Misogynist posted:

I think that just means that if you stop cron at 11:59:59 and start it at 12:00:00, it won't run jobs for 12:00. It should still try to run jobs if for some reason the process scheduler doesn't run it during the minute it's supposed to.

Ya, you are right. I got excited when I saw almost exactly what I was looking for in that comment. From what I can tell, it will continuously process more jobs in a loop until the time to sleep is greater than 0 seconds.

angrytech
Jun 26, 2009
Boom! Got it working. Thanks to evol262 for putting me on the right track, and to stathol for showing me named-checkzone, which is way easier than restarting the damned server just to look for error messages.
My /etc/bind/zones/mydomain.net.db now has this:
code:
; a bunch of lines of stuff
mydomain.net.	IN      NS              mydomain.net.
mydomain.net.	IN      MX     0        mydomain.net.
mydomain.net.	IN	A		ip.of.my.server
which seems to work

Inquisitus
Aug 4, 2006

I have a large barge with a radio antenna on it.

ClosedBSD posted:

I know it's possible to manually upgrade to unstable without moving to a new kernel, but once again I'm going to have to recommend using smxi to do it - it just makes things so easy.

I used it last night to push my VPS to unstable for the Mumble server I was setting up, and it gave me the option of leaving my old kernel. Sure enough, as I check it now, I am still using my provider's Xen-optimized kernel; uname -a says it's 2.6.39-linode33.


Edit: nvm maybe I left it on Debian stable, lemme see if I can figure it out

I've just tried smxi (having updated sources.list appropriately) but it just seems to choke on the same thing.

The steps I'm taking are:
  • Modify sources.list to point to unstable (sketch below)
  • Run smxi -! 32 (to avoid complaints about boot loader)
Am I doing something wrong?

Inquisitus fucked around with this message at 20:45 on Jun 10, 2011
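
For reference, by "point to unstable" I just mean swapping the suite name in sources.list (mirror URL arbitrary):

code:
# /etc/apt/sources.list
deb http://ftp.debian.org/debian unstable main contrib non-free
deb-src http://ftp.debian.org/debian unstable main contrib non-free
followed by apt-get update before letting smxi (or a manual apt-get dist-upgrade) take over.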

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Inquisitus posted:

I've just tried smxi (having updated sources.list appropriately) but it just seems to choke on the same thing.

Why not use xen if the provider supports that?

Inquisitus
Aug 4, 2006

I have a large barge with a radio antenna on it.

taqueso posted:

Why not use xen if the provider supports that?

Because it's marginally more expensive :shobon:

If I can't get it working under OpenVZ then I'll just switch to Xen I guess. That said, might there be similar problems with Xen PV, since it requires the kernel to be built for it?

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Inquisitus posted:

Because it's marginally more expensive :shobon:

If I can't get it working under OpenVZ then I'll just switch to Xen I guess. That said, might there be similar problems with Xen PV, since it requires the kernel to be built for it?

I suppose it could be a problem, but maybe you need a different VPS host if it is. Linode provides 2.6.39 for example.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

covener posted:

Remove world-writability from that dir and any files (chmod -R o-w awesome_dir/). Add world-execute (search) to that dir and any below it (find awesome_dir/ -type d | xargs chmod o+rx)

Could you explain why this is necessary?

crazyfish
Sep 19, 2002

Thermopyle posted:

Could you explain why this is necessary?

Because having a world-writable apache directory is just asking for someone to log in to your box and drop whatever they want to be served up by apache. If you want to do the permissions right, in addition to what was previously stated, create a group for all the users that need write access to the directory and change the directory's group appropriately.

crazyfish fucked around with this message at 23:24 on Jun 10, 2011
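
A sketch of that setup (webcontent is a placeholder group name; therms is the owner from the earlier ls output):

code:
# dedicated group owns the tree: members get write, everyone else read-only
groupadd webcontent
usermod -aG webcontent therms

chgrp -R webcontent awesome_dir/
chmod -R g+w awesome_dir/

# setgid on directories so new files inherit the group
find awesome_dir/ -type d -exec chmod g+s {} +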

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

crazyfish posted:

Because having a world-writable apache directory is just asking for someone to log in to your box and drop whatever they want to be served up by apache. If you want to do the permissions right, in addition to what was previously stated, create a group for all the users that need write access to the directory and include www-data (or whatever your apache user is) in that group (though I don't think apache even needs write access to it, and covener's procedure should be sufficient to allow apache to work).

Well, yeah. I was planning on fixing that up anyway. I assumed he was answering the question I asked.

You pointed out the correct answer, though. I forgot that I could just add www-data to a new group...thanks!

Malfeasible
Sep 10, 2005

The sworn enemy of Florin
I am curious about file modification times for downloaded files.

I was on Facebook today and I downloaded an image a friend posted. When I looked for the image in a terminal with "ls -ltr", expecting it to be at the bottom of the list, it was at the top, with a last-modified date of 2007-12-31 19:00! So I checked my clock and calendar and they were accurate, I touched a file and its timestamp changed correctly, and I created a file in vim in that same directory and its last-modified date was correct. So why is the date-time stamp so horribly wrong for this newly downloaded file? I downloaded it with wget on openSUSE 11.2, and the picture was just taken the other day.

I just downloaded the Something Awful "hot" tag with wget at http://forumimages.somethingawful.com/forums/posticons/icon-31-hotthread.gif ...and it has a timestamp of 2004-03-19 13:23.

Is that the same date one would see if doing an "ls -l" on the server? I could understand that for the hot tag, but why would a picture taken yesterday seem to be so old? Does it matter if I download onto my desktop with a linux OS from a server with a non-linux OS?
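
I gather wget uses the server's Last-Modified header for the local file's mtime by default, which would explain it, but I'd like to confirm. A quick way to see what the server is actually sending (using the hot-tag URL above), and to reset the local copy if I decide I'd rather have the download time:

code:
# show the server's claimed modification time
curl -sI http://forumimages.somethingawful.com/forums/posticons/icon-31-hotthread.gif | grep -i last-modified

# stamp the local copy with "now" instead
touch icon-31-hotthread.gif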

Kenfoldsfive
Jan 1, 2003

The un-bitey-ness of a chicken's head and the "I don't want to cook that"-ness of a dog's body
This is an isc-dhcpd question, which I suppose isn't specific to Linux, but this seems the most applicable place:

I'm running two dhcpd servers in failover mode, both serving a single subnet (x.x.5.0) with a single pool of addresses. Everything is happy.

The problem is now that I'm setting it up to respond to DHCP relays (actual relay agent is a Cisco router). I added a second subnet declaration (x.x.6.0) and a second address pool on both servers, and again, all seemed happy.
code:
shared-network foo {
        option  domain-name "foo.com";
        option  domain-name-servers x.x.x.x
        authoritative;
        subnet x.x.5.0 netmask 255.255.255.0 {
                option routers  x.x.5.1;
                option subnet-mask 255.255.255.0;
                option broadcast-address x.x.5.255;
                pool {
                        range x.x.5.111 x.x.5.220;
                        failover peer "peer";
                }
        }

         subnet x.x.6.0 netmask 255.255.255.0 {
                option routers x.x.6.1;
                option subnet-mask 255.255.255.0;
                option broadcast-address x.x.6.255;
                pool {
                        range x.x.6.20 x.x.6.100;
                        failover peer "peer";
                }
        }
However, when my client on the x.x.6.0 subnet requests an address, it asks for an old address from the x.x.5.0 range, and rather than NACKing it as an invalid address, dhcpd OKs it:

code:
DHCPREQUEST for x.x.5.120 from 00:00:00:1a:00:48 (FOO) via x.x.6.1
DHCPACK on x.x.5.120 to 00:00:00:1a:00:48 (FOO) via x.x.6.1
NACKing an out of scope IP address is a pretty common behavior - I can't imagine ISC overlooked it. What am I missing here?

bort
Mar 13, 2003

This is probably just a mistake in your post, but:
code:
        option  domain-name-servers x.x.x.x   <---
        authoritative;
is missing a semicolon. If that's in your config file, it might not be realizing it's authoritative for the shared-network and wouldn't send NAKs.
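
Once the semicolon is in, dhcpd's built-in syntax check will catch this sort of thing without restarting anything (the config path is the Debian/Ubuntu default -- adjust for your distro):

code:
# parse-check the config without touching the running daemon
dhcpd -t -cf /etc/dhcp/dhcpd.conf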

  • Reply