roadhead
Dec 25, 2001

They are not fooling around with security in FreeBSD.

I just finished installing the 8.0 RC and am currently left without root access to the box :)

I had to take several steps to even enable SSH and things like that. Fine. But I did not expect that I would not be able to 'su' to root without adding myself to a particular group.

And since I can't get to root, I don't have the permissions to do the fix I found.

Of course I can drag a monitor and keyboard over to the thing (again) and log in there as root (I hope!) and fix it, but WOW, these guys are careful. If your FreeBSD box gets remotely rooted, you had to go out of your way to even give someone the opportunity!
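
If the fix I found is what I think it is, it boils down to adding my user to the wheel group once I'm on the console as root - something like this, going from memory, and 'myuser' is just a placeholder for whatever login I actually use:

code:
# run as root; 'myuser' is a placeholder for the real login name
pw groupmod wheel -m myuser
After that, su should stop telling me to get lost.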



roadhead fucked around with this message at 02:27 on Oct 8, 2009


roadhead
Dec 25, 2001

Bob Morales posted:

Dare I ask what the 20 hard drives are for?

Only hooking up 10 right away - but to answer your question I want a very large Raid-Z2 pool for everything I have currently on optical media.

The other 10 bays are for when I want to put in a second Raid-Z2 and mirror the first one!
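
Once the rest of the bays are cabled up I'm assuming the pool creation itself is a one-liner roughly like this - the device names are just placeholders for whatever FreeBSD ends up calling the drives:

code:
# ten drives in one double-parity vdev; device names are placeholders
zpool create storage raidz2 ad4 ad6 ad8 ad10 ad12 ad14 ad16 ad18 ad20 ad22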

adorai posted:

he's probably building a home virtualization environment and needs 20 spindles worth of IO.

The machine has a Phenom II X3 705e with 4 gigs of DDR2-800 on a MA785G-UD3H. I still need a few more 4- and 2-port PCI-E SATA cards to hook all 20 bays up; right now only the first 8 are hooked into anything besides a fan-out cable. I won't actually be storing anything interesting, except maybe all the home-grown footage ripped from my Sony Digital8 camera that will never ever see the light of day!

adorai posted:

requiring users to be in the wheel group in order to su to root from ssh has been in freebsd for a long time. I always think it's odd when I install redhat or centos and don't have to be in the wheel group, in fact, I can just ssh in as root!

Forgot to mention this is my first Foray into BSDLand :)

roadhead fucked around with this message at 16:21 on Oct 10, 2009

roadhead
Dec 25, 2001

Got myself some wheels!

Installing mencoder - FreeBSD has a high ratio of "scrolling text" to commands typed; all I typed was "make" and the bitch has been going crazy for a bit now.

roadhead
Dec 25, 2001

Anyone

code:
freebsd-update -r 8.0-RC2 upgrade
yet?
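
From what I've read the whole dance goes roughly like this - a sketch of the documented procedure, so correct me if I'm missing a step:

code:
freebsd-update -r 8.0-RC2 upgrade
freebsd-update install
shutdown -r now
# after the reboot, run install again to finish the userland bits
freebsd-update install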

roadhead
Dec 25, 2001

This is a fun one. I was having trouble with an in-place upgrade, so I decided to do a fresh 8.0RC2 install, since my previous RC1 install was my first BSD install ever and I now had a much better idea of the packages I would actually require.

In my due diligence I exported my main Raid-Z2 ("zpool export storage"), thinking this would make it easier to import the pool correctly into the new install.

I could not have been more wrong.

The installs kept wigging out early on, with the screen getting all weird and never prompting me. I blame this on the dodgy IDE DVD-ROM I've been using, so I go back to the still-untouched RC1 install (I was trying the clean install on a different drive) and try to import my array.

Invalid vdev configuration! HURRAY! I try -f. Same. All the disks show online, but the pool just can NOT be re-imported.

One thing that might factor in here: when the pool was created, I had /dev/ad20 and /dev/ad22 (in addition to ad6 through ad18 by even numbers) - at some point I apparently did something in the BIOS that caused 20 and 22 to become 3 and 4. Strangely, ZFS didn't flinch - I didn't even have to re-silver, it just kept on trucking. It was in that state of vdev labels, or whatever, when I exported it.

Since the data loss I have investigated the option I changed, and I can now have the drives appear under either set of names with a reboot and a trip into the BIOS. That doesn't help.

I've pretty much resigned myself to total data loss at this point (it WAS the back-up, so I have to go about the task of filling it again. A lot of what I backed up to it is still on DVD-R) but have not yet created a new pool with the disks because I haven't spent enough time troubleshooting it (none really) and there is the possibility someone might still be able to help me :)

So uhhh, guys. What's your prognosis? Have I wasted the 4 weeks I spent uploading data onto this server, or can this thing be re-imported?

roadhead
Dec 25, 2001

SamDabbers posted:

Have you tried booting off of an OpenSolaris live CD? It might be able to mount your zpool where the FreeBSD port of ZFS wasn't able to. It's not a solution by any stretch, but it would tell you if your zpool is corrupt or if you've encountered a particularly nasty bug.

Good idea, heading to grab the ISO now :)

roadhead
Dec 25, 2001

If anyone remembers my problem (exported Raid-Z2 pool under 8.0RC1 - never to be imported again), I finally got OpenSolaris booting (got a new IDE optical drive in) and it has an even lower opinion of the pool than FreeBSD does.

Opensolaris posted:

jack@opensolaris:~# zpool import
  pool: storage
    id: 8762583492932802839
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        storage       UNAVAIL  insufficient replicas
          raidz2      UNAVAIL  insufficient replicas
            c7t0d0s8  UNAVAIL  corrupted data
            c7t1d0s8  UNAVAIL  corrupted data
            c8t0d0s8  UNAVAIL  corrupted data
            c8t1d0p0  ONLINE
            c9t0d0s2  ONLINE
            c9t1d0s8  UNAVAIL  corrupted data
            c9t2d0s8  UNAVAIL  corrupted data
            c9t3d0s8  UNAVAIL  corrupted data
            c9t4d0s2  ONLINE
            c9t5d0s8  UNAVAIL  corrupted data

8.0RC1 posted:

hydra# zpool import
  pool: storage
    id: 8762583492932802839
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        storage     UNAVAIL  insufficient replicas
          raidz2    UNAVAIL  corrupted data
            ad4     ONLINE
            ad6     ONLINE
            ad8     ONLINE
            ad10    ONLINE
            ad12    ONLINE
            ad14    ONLINE
            ad16    ONLINE
            ad18    ONLINE
            ad20    ONLINE
            ad22    ONLINE


I think the devices are just being enumerated incorrectly - so the labels written to the disks during export don't match what it's finding now. Is there any way to edit/move these around manually?

Solaris just thinks 7 of the devices are in the wrong place (I think?) or maybe the drives really are all rear end end up?
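
I don't know of a sane way to rewrite the labels by hand, but I'm guessing I can at least dump what each disk thinks it is and compare the paths and guids, something like:

code:
# repeat for ad6 through ad22
zdb -l /dev/ad4 | grep -E 'path|guid'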

roadhead fucked around with this message at 00:29 on Nov 10, 2009

roadhead
Dec 25, 2001

SamDabbers posted:

The import action takes into account that the disks may not have the same /dev nodes as they did when they were exported. The ZFS labels written to disk don't contain the /dev paths of the disks when the zpool was exported. They contain, among other metadata, the name/UUID of the zpool and the UUIDs of the other member disks. As long as all member disks are present in the system when you go to import the zpool, ZFS should be able to figure out the stripe order of the zpool from the labels.

I hate to say it, but it looks like your zpool is hosed. :(

This is what I get for using ZFS ported to a release candidate, I guess :) I booted into 8.0RC1 just after looking at it in Solaris to get that report; why do they have such vastly differing opinions as to the health of the individual drives in the array?

roadhead
Dec 25, 2001

SamDabbers posted:

It looks like they both have the same opinion of the state of the zpool, but the OpenSolaris implementation (which is a few revisions ahead of the FreeBSD port) gives you more detail as to which drives contain corrupt data.


This won't work. According to the ZFS Administration Guide the only way to recover from this type of failure is to recreate the zpool and repopulate the data from a backup.


I've resigned myself to building a new array and transferring those 3 TB back over the network again, but I really wish I had left the pool alone rather than exporting it :( c'est la vie.

roadhead
Dec 25, 2001

Just got one of these to replace a 16 GB CF card and one of these - after my first CF to IDE adapter BLEW OUT A TRACE on the PCB.

The Transcend SSD is here, but I haven't used it yet - after putting my CF card in the new adapter, FreeBSD 8.0 kept on trucking and the array was unfazed. Pretty amazing considering the hard drive with the OPERATING SYSTEM on it essentially disappeared out from under it, causing a dirty shutdown once I noticed its LEDs were off (though both my SSH sessions had already become unresponsive and it would no longer respond to a ping).

Can I add the Transcend SSD with the 16GB CF drive in some sort of strange Raid-1 - half the 32 gig SSD in the RAID and half unused or formatted for swap?
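
I'm picturing something like carving the SSD up with gpart and then mirroring the 16 GB slice against the CF card with gmirror - total guess on my part, the device names below are placeholders, and I have no idea yet whether gmirror is happy pairing a whole device with a partition:

code:
# placeholders: ad0 = the 16 GB CF card, ad1 = the 32 GB SSD
gpart create -s gpt ad1
gpart add -t freebsd-ufs -s 16G -l mirror-half ad1
gpart add -t freebsd-swap -l swap0 ad1
gmirror label -v gm0 /dev/ad0 /dev/ad1p1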

roadhead
Dec 25, 2001

w_hat posted:

I can't believe an export caused that. I'm definitely sticking with OpenSolaris now.

After extensive testing it looks like you are right. While trying to track down stability issues that I ultimately determined were caused by using four 2 GB DDR2 DIMMs, I also accidentally unlocked the 4th core on my Phenom II 705e, and I believe it was during those shenanigans that I hosed up the array.

Also while that 4th core was active nothing on the box worked quite right :downs:

So the box is great now, albeit with 4 gigs of RAM instead of 8, and FreeBSD/ZFS is awesome.

roadhead
Dec 25, 2001

code:
Mem: 716M Active, 872M Inact, 839M Wired, 3472K Cache, 405M Buf, 1386M Free
Swap: 4096M Total, 12M Used, 4083M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  950 root          1 118    0  8932K  4148K CPU1    1  58.6H 100.00% mountd
55266 root         55  44    0  1007M   595M fifoor  1  28:19  0.00% java
22893 root         20  64    0   183M   119M uwait   1  11:08  0.00% python2.6
WTF is up with mountd?

code:
hydra# uname -a
FreeBSD hydra.home.biggestpos.com 8.0-RELEASE-p1 FreeBSD 8.0-RELEASE-p1 #0: Fri Dec 11 13:33:41 CST 2009     robpayne@hydra.home.biggestpos.com:/usr/obj/usr/src/sys/HYDRA  amd64

hydra# zpool status
  pool: storage
 state: ONLINE
 scrub: scrub completed after 5h48m with 0 errors on Tue Mar 23 14:21:55 2010
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            ad4     ONLINE       0     0     0  99.0M repaired
            ad6     ONLINE       0     0     0  125M repaired
            ad8     ONLINE       0     0     0  48K repaired
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0
            ad18    ONLINE       0     0     0
            ad20    ONLINE       0     0     0
            ad22    ONLINE       0     0     0

errors: No known data errors

hydra# uptime
 8:52AM  up 28 days, 20:47, 2 users, load averages: 1.00, 1.00, 1.00

Can I just kill it? That would probably be bad right?

roadhead
Dec 25, 2001

jandrese posted:

mountd is for servicing new NFS mount requests; it can be safely restarted, and in fact probably should be, since it appears to have gotten itself stuck.

Thanks, restarted it and it appears to be consuming a normal amount of CPU time now.
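
For the record the restart was nothing fancy - just the rc script, assuming mountd is enabled in rc.conf (which it is here, since NFS is on):

code:
/etc/rc.d/mountd restart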

roadhead
Dec 25, 2001

SmirkingJack posted:

I was mostly looking for software suggestions. I didn't look at the handbook too closely since it looked less like a general DNS guide and more like a Bind how-to, but I went back and set it up. As it turns out, it was far less scary and complicated than I thought it would be. Thanks, everyone, for the suggestions!

That's because *NIX and Bind ARE the general-use DNS :)

roadhead
Dec 25, 2001

I have a Raid-Z2 zpool with 10 devices, all Western Digital Green drives.

Today I decided, in my infinite idiocy, to bring the box down and run WDTLER on the drives, as the array would currently stall long enough to cause a kernel panic at times, and I thought this would be a fix.

9 drives have no problems, but WD-WMAVU0467050 says "can't be set" - I think WD removed the ability to change this on newer drives, as the other 9 all have much lower serial numbers.

With all but 1 drive changed to a TLER of 7 seconds (read and write) I boot back into FreeBSD 8.

And I can't mount, import, or really do anything with my array except see this:

code:
hydra# zpool import
  pool: storage
    id: 1927227762911526040
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        storage     UNAVAIL  insufficient replicas
          raidz2    UNAVAIL  corrupted data
            ad4     ONLINE
            ad6     ONLINE
            ad8     ONLINE
            ad10    ONLINE
            ad12    ONLINE
            ad14    ONLINE
            ad16    ONLINE
            ad18    ONLINE
            ad20    ONLINE
            ad22    ONLINE
hydra# zpool import storage
cannot import 'storage': invalid vdev configuration
hydra# zpool import 1927227762911526040
cannot import 'storage': invalid vdev configuration

I even tried a "zpool destory storage" but it says the pool doesn't exist!

So can changing the TLER setting of a drive change it enough for ZFS to wig out?

roadhead
Dec 25, 2001

netmazk posted:

I can't recall to what degree FreeBSD's ZFS implementation relies on disk signatures instead of drive numbers (we use glabels on our ZFS arrays), but it's possible that you shuffled the drives around and it is "confused" as to what is what.

A quick glance around the OpenSolaris lists suggests that shuffling drives without first exporting the pool can have the same consequence. I would try this now:
# zpool export storage
# zpool import storage

Forcing the export may get it out of the half-state it's in and force ZFS to re-taste the disks and learn what goes where. If that fails, you may have to boot a recent OpenSolaris image and mess around with zdb(1).

I didn't move the drives around - in fact I didn't move the drives at all. Everything is in the same place on the same ports; I just ran wdtler on them, and that's the only difference.

I can't export

code:
hydra# zpool import
  pool: storage
    id: 1927227762911526040
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        storage     UNAVAIL  insufficient replicas
          raidz2    UNAVAIL  corrupted data
            ad4     ONLINE
            ad6     ONLINE
            ad8     ONLINE
            ad10    ONLINE
            ad12    ONLINE
            ad14    ONLINE
            ad16    ONLINE
            ad18    ONLINE
            ad20    ONLINE
            ad22    ONLINE
hydra# zpool export storage
cannot open 'storage': no such pool

And my ISP took a poo poo on me this afternoon and I had to tether my phone to post this :) Once the cable modem is back on-line I'll get the latest OpenSolaris liveCD and try zdb :)

roadhead
Dec 25, 2001

FISHMANPET posted:

Well you can't export it because it's not imported. Although I can't see why it thinks your vdev is incomplete. That being said, if you don't give a poo poo about any data on the pool, just create a new vdev.

And if I care "great shits" about all the data on the pool?

Argh, it's all backed up, but none of it is on-line or capable of being pulled faster than 100 megabit (at best), so I would love to save this one if possible, since it's over 5 TB of stuff :)

Also "zpool status" says something about not being able to initialize ZFS lib when I run as a non-root user, I remember being able to run zpool before as a non-privileged user, is it because its not mounting or un-related?

ZDB on FreeBSD says

code:
hydra# zdb
cannot open '/boot/zfs/zpool.cache': No such file or directory
and sure enough that file doesn't exist, should it?
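
From what I can tell that cache file only exists while a pool is imported, and zdb is supposed to have an -e flag for poking at a pool that isn't in the cache - something like this, though I haven't tried it yet:

code:
zdb -e storage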

roadhead fucked around with this message at 01:40 on Jul 7, 2010

roadhead
Dec 25, 2001

Bob Morales posted:

Wouldn't that be a bad idea?

I changed the head-parking timing a long time ago, and I believe the array even rebooted and was no worse for wear.

How changing the TLER is any different, I'm presently unaware.

Everything was hunky-dory until I tried to make it "better" :(

roadhead
Dec 25, 2001

netmazk posted:

Try running zdb -l /dev/ad4. Repeat for each device. Each drive should successfully display 4 labels which all contain essentially the same information describing the zpool.

Should I be piping these into files for later perusal, or is the act of querying for the info what I'm after?
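
In the meantime I went ahead and saved them all off with a quick loop, just in case - the file names are just wherever I felt like sticking them:

code:
for d in ad4 ad6 ad8 ad10 ad12 ad14 ad16 ad18 ad20 ad22; do
    zdb -l /dev/$d > /root/labels.$d.txt
done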

ad20 was unique in that its Label 2 and Label 3 "failed to unpack"; I didn't notice that for any of the other devices. Is that bad?

code:
hydra# zdb -l /dev/ad20
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='storage'
    state=0
    txg=678556
    pool_guid=1927227762911526040
    hostid=1407561558
    hostname='hydra.home.biggestpos.com'
    top_guid=969465251034111238
    guid=12601938019356116885
    vdev_tree
        type='raidz'
        id=0
        guid=969465251034111238
        nparity=2
        metaslab_array=23
        metaslab_shift=37
        ashift=9
        asize=15002970357760
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=8480783778198015884
                path='/dev/ad4'
                whole_disk=0
                DTL=47
        children[1]
                type='disk'
                id=1
                guid=2948080225588684788
                path='/dev/ad6'
                whole_disk=0
                DTL=46
        children[2]
                type='disk'
                id=2
                guid=9319140863036432533
                path='/dev/ad8'
                whole_disk=0
                DTL=45
        children[3]
                type='disk'
                id=3
                guid=8073044400271224919
                path='/dev/ad10'
                whole_disk=0
                DTL=43
        children[4]
                type='disk'
                id=4
                guid=8460640024122198858
                path='/dev/ad12'
                whole_disk=0
                DTL=49
        children[5]
                type='disk'
                id=5
                guid=1792081462916445300
                path='/dev/ad14'
                whole_disk=0
                DTL=42
        children[6]
                type='disk'
                id=6
                guid=4055537292500072897
                path='/dev/ad16'
                whole_disk=0
                DTL=41
        children[7]
                type='disk'
                id=7
                guid=5980344425067716449
                path='/dev/ad18'
                whole_disk=0
                DTL=40
        children[8]
                type='disk'
                id=8
                guid=12601938019356116885
                path='/dev/ad20'
                whole_disk=0
                DTL=107
        children[9]
                type='disk'
                id=9
                guid=17721033744186541552
                path='/dev/ad22'
                whole_disk=0
                DTL=38
--------------------------------------------
LABEL 1
--------------------------------------------
    version=13
    name='storage'
    state=0
    txg=678556
    pool_guid=1927227762911526040
    hostid=1407561558
    hostname='hydra.home.biggestpos.com'
    top_guid=969465251034111238
    guid=12601938019356116885
    vdev_tree
        type='raidz'
        id=0
        guid=969465251034111238
        nparity=2
        metaslab_array=23
        metaslab_shift=37
        ashift=9
        asize=15002970357760
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=8480783778198015884
                path='/dev/ad4'
                whole_disk=0
                DTL=47
        children[1]
                type='disk'
                id=1
                guid=2948080225588684788
                path='/dev/ad6'
                whole_disk=0
                DTL=46
        children[2]
                type='disk'
                id=2
                guid=9319140863036432533
                path='/dev/ad8'
                whole_disk=0
                DTL=45
        children[3]
                type='disk'
                id=3
                guid=8073044400271224919
                path='/dev/ad10'
                whole_disk=0
                DTL=43
        children[4]
                type='disk'
                id=4
                guid=8460640024122198858
                path='/dev/ad12'
                whole_disk=0
                DTL=49
        children[5]
                type='disk'
                id=5
                guid=1792081462916445300
                path='/dev/ad14'
                whole_disk=0
                DTL=42
        children[6]
                type='disk'
                id=6
                guid=4055537292500072897
                path='/dev/ad16'
                whole_disk=0
                DTL=41
        children[7]
                type='disk'
                id=7
                guid=5980344425067716449
                path='/dev/ad18'
                whole_disk=0
                DTL=40
        children[8]
                type='disk'
                id=8
                guid=12601938019356116885
                path='/dev/ad20'
                whole_disk=0
                DTL=107
        children[9]
                type='disk'
                id=9
                guid=17721033744186541552
                path='/dev/ad22'
                whole_disk=0
                DTL=38
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

roadhead
Dec 25, 2001

FISHMANPET posted:

I only brought it up because you said you tried to destroy the pool, so I assumed there wasn't any data on there.

I don't know much about BSD ZFS, but you could try booting an OpenSolaris live CD and doing the import there to see what happens?

Last time I had troubles with a pool, "destroying" and then "zpool import -D" was what fixed it; I don't think it's ever really "destroyed" until the disks are assigned to new pools and resilvered or something, right?


Also I just noticed from the serial number that ad20 is the drive that would not accept new TLER settings (467,xxx, while the rest of the drives are 1xx,xxx) - since it's a Raid-Z2 I should be able to lose this drive COMPLETELY (and another!) and still re-silver onto one of my spares, right?

roadhead fucked around with this message at 04:32 on Jul 7, 2010

roadhead
Dec 25, 2001

netmazk posted:

Sounds pretty logical, and the zdb output makes it seem like it's the culprit. Have you tried to 'zpool detach' the drive? I don't think it'll let you if the pool isn't imported, but who knows. If it doesn't, I would shutdown, remove the suspected drive, and start it back up and see if it helps. It can't really hurt at this point. FYI, the second two labels (the ones that are missing) are stored at the end of the disk, rather than the beginning.

code:
hydra# zpool detach storage /dev/ad20
cannot open 'storage': no such pool
Yea detach isn't going to work I guess, but I'll power down, remove ad20 and put in a spare. It definitely can't hurt at this point :)

code:

hydra# zpool import
  pool: storage
    id: 1927227762911526040
 state: DEGRADED
status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-2Q
config:

        storage     DEGRADED
          raidz2    DEGRADED
            ad4     ONLINE
            ad6     ONLINE
            ad8     ONLINE
            ad10    ONLINE
            ad12    ONLINE
            ad14    ONLINE
            ad16    ONLINE
            ad18    ONLINE
            ad20    UNAVAIL  cannot open
            ad22    ONLINE
hydra# zpool import storage
It's still currently "importing" with a DRIVE MISSING - but assuming it imports successfully I should be able to re-silver and be working again, right?

Import finished, and I got this

code:

hydra# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   DEGRADED     0     0     0
          raidz2                  DEGRADED     0     0     0
            ad4                   ONLINE       0     0     0
            ad6                   ONLINE       0     0     0
            ad8                   ONLINE       0     0     0
            ad10                  ONLINE       0     0     0
            ad12                  ONLINE       0     0     0
            ad14                  ONLINE       0     0     0
            ad16                  ONLINE       0     0     0
            ad18                  ONLINE       0     0     0
            12601938019356116885  UNAVAIL      0 5.00K     0  was /dev/ad20
            ad22                  ONLINE       0     0     0

errors: No known data errors
So at this point I'm thinking shut down, put a disk back in ad20, and it should figure out the rest, right?

roadhead fucked around with this message at 05:23 on Jul 7, 2010

roadhead
Dec 25, 2001

netmazk posted:

I would take a minute and do a quick wipe of your old "ad20" in another machine. Then plug it back in and treat it just like you are replacing a dead drive with a brand new one.

Yea I tried putting the most recent incarnation of ad20 in there, and it went back to its old "degraded" status :/

Rebooting again without anything in that bay.

So any drive I put in there needs to have the "ZFS Smell" cleaned off first?


With a drive totally missing, I think it even imported on boot this time, I was able to do a "zfs mount storage/stuff" and even get an LS of that filesystem, so everything should be fine as soon as I can get this thing re-silvered onto one of these spares, which I will do tomorrow.

roadhead fucked around with this message at 05:33 on Jul 7, 2010

roadhead
Dec 25, 2001

roadhead posted:

Yea I tried putting the most recent incarnation of ad20 in there, and it went back to its old "degraded" status :/

Rebooting again without anything in that bay.

So any drive I put in there needs to have the "ZFS Smell" cleaned off first?


With a drive totally missing, I think it even imported on boot this time, I was able to do a "zfs mount storage/stuff" and even get an LS of that filesystem, so everything should be fine as soon as I can get this thing re-silvered onto one of these spares, which I will do tomorrow.

Ok, when I have a freshly NTFS-formatted drive connected up as /dev/ad20 I get UNAVAIL -
code:
hydra# zpool status
  pool: storage
 state: UNAVAIL
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   UNAVAIL      0     0     0  insufficient replicas
          raidz2                  UNAVAIL      0     0     0  corrupted data
            ad4                   ONLINE       0     0     0
            ad6                   ONLINE       0     0     0
            ad8                   ONLINE       0     0     0
            ad10                  ONLINE       0     0     0
            ad12                  ONLINE       0     0     0
            ad14                  ONLINE       0     0     0
            ad16                  ONLINE       0     0     0
            ad18                  ONLINE       0     0     0
            12601938019356116885  ONLINE       0     0     0  was /dev/ad20
            ad22                  ONLINE       0     0     0

And since the pool isn't actually online, none of my replace/remove commands have any effect.

If I just SLIDE IT OUT with the machine powered on and everything, I at least get DEGRADED status -

code:
hydra# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   DEGRADED     0     0     0
          raidz2                  DEGRADED     0     0     0
            ad4                   ONLINE       0     0     0
            ad6                   ONLINE       0     0     0
            ad8                   ONLINE       0     0     0
            ad10                  ONLINE       0     0     0
            ad12                  ONLINE       0     0     0
            ad14                  ONLINE       0     0     0
            ad16                  ONLINE       0     0     0
            ad18                  ONLINE       0     0     0
            12601938019356116885  UNAVAIL      0    43     0  was /dev/ad20
            ad22                  ONLINE       0     0     0

errors: No known data errors
hydra# zpool replace -f storage /dev/ad20
cannot open '/dev/ad20': No such file or directory
hydra# zpool replace -f storage 12601938019356116885
cannot open '12601938019356116885': no such GEOM provider
must be a full path or shorthand device name
hydra# zpool replace -f storage ad20
cannot open 'ad20': no such GEOM provider
must be a full path or shorthand device name
but I can't figure out how to get it ready to accept this "new" drive. It IS a different drive than the previous one, with a much lower serial number and TLER-able, but it was previously part of the array, just not in this spot.

However, I did load the drive up in my gaming rig first, and after running tler-on I booted into Win 7 and did a GPT/quick NTFS format.

Did I need a full format? Is there something else I can do to start the re-silver manually?

roadhead
Dec 25, 2001

Anyone want to tell me what happens if I export the pool in this degraded state, reboot, plug ad20 back in so it can be detected, and then try to import the pool?

Should I be exporting on every shutdown?

roadhead
Dec 25, 2001

netmazk posted:

I'm not sure what that will do. I don't think it's going to help. I would use dd to write /dev/zero to the first 100MB of the disk just in case the format didn't reach far enough to catch the second label (I have no idea how big they are...).

While the pool is in the degraded state (without ad20 in the box) try to 'zpool offline' the device by using the unique ID. ZFS will remember that through a reboot, so you should be able to boot the box up with the 'new' ad20 and issue a 'zpool replace 12601938019356116885 /dev/ad20'.

Ok, trying to rid this drive of all its labels - it had a complete set as far as I could tell. The first few dd attempts only got the early ones.

code:
dd if=/dev/zero of=/dev/ad20 bs=1M count=1600000
Surely nothing on the disk can survive that?

Ok, I pulled ad20 and, well...

code:

hydra# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   DEGRADED     0     0     0
          raidz2                  DEGRADED     0     0     0
            ad4                   ONLINE       0     0     0
            ad6                   ONLINE       0     0     0
            ad8                   ONLINE       0     0     0
            ad10                  ONLINE       0     0     0
            ad12                  ONLINE       0     0     0
            ad14                  ONLINE       0     0     0
            ad16                  ONLINE       0     0     0
            ad18                  ONLINE       0     0     0
            12601938019356116885  UNAVAIL      0    56     0  was /dev/ad20
            ad22                  ONLINE       0     0     0

errors: No known data errors
hydra# zpool offline storage 12601938019356116885
cannot offline 12601938019356116885: no valid replicas
hydra# zpool remove storage 12601938019356116885
cannot remove 12601938019356116885: only inactive hot spares or cache devices can be removed
hydra# zpool detach storage 12601938019356116885
cannot detach 12601938019356116885: only applicable to mirror and replacing vdevs
hydra# zpool replace storage 12601938019356116885
cannot open '12601938019356116885': no such GEOM provider
must be a full path or shorthand device name
:/

roadhead fucked around with this message at 03:30 on Jul 8, 2010

roadhead
Dec 25, 2001

netmazk posted:

What happens if you boot up with the drive in place now? Does the zpool come up or go back to being unavail? If it comes up, just do the replace.

If that doesn't work you may have to borrow/buy a drive to replace ad20. This way the unique id won't match, the pool will be degraded, and you can 'zpool replace 12601938019356116885 /dev/ad20'. Theoretically after that you could swap back to the original ad20 and re-run the replace.

With a drive plugged into that port I still get UNAVAIL.

However the drive still can't shake these two labels.

code:

--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='storage'
    state=0
    txg=190790
    pool_guid=1927227762911526040
    hostid=1407561558
    hostname='hydra.home.biggestpos.com'
    top_guid=969465251034111238
    guid=3779500033167316302
    vdev_tree
        type='raidz'
        id=0
        guid=969465251034111238
        nparity=2
        metaslab_array=23
        metaslab_shift=37
        ashift=9
        asize=15002928414720
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=8480783778198015884
                path='/dev/ad4'
                whole_disk=0
                DTL=47
        children[1]
                type='disk'
                id=1
                guid=2948080225588684788
                path='/dev/ad6'
                whole_disk=0
                DTL=46
        children[2]
                type='disk'
                id=2
                guid=9319140863036432533
                path='/dev/ad8'
                whole_disk=0
                DTL=45
        children[3]
                type='disk'
                id=3
                guid=8073044400271224919
                path='/dev/ad10'
                whole_disk=0
                DTL=43
        children[4]
                type='disk'
                id=4
                guid=8460640024122198858
                path='/dev/ad12'
                whole_disk=0
                DTL=49
        children[5]
                type='disk'
                id=5
                guid=1792081462916445300
                path='/dev/ad14'
                whole_disk=0
                DTL=42
        children[6]
                type='disk'
                id=6
                guid=4055537292500072897
                path='/dev/ad16'
                whole_disk=0
                DTL=41
        children[7]
                type='disk'
                id=7
                guid=5980344425067716449
                path='/dev/ad18'
                whole_disk=0
                DTL=40
        children[8]
                type='disk'
                id=8
                guid=3779500033167316302
                path='/dev/ad20'
                whole_disk=0
                DTL=39
        children[9]
                type='disk'
                id=9
                guid=17721033744186541552
                path='/dev/ad22'
                whole_disk=0
                DTL=38
--------------------------------------------
LABEL 3
--------------------------------------------
    version=13
    name='storage'
    state=0
    txg=190790
    pool_guid=1927227762911526040
    hostid=1407561558
    hostname='hydra.home.biggestpos.com'
    top_guid=969465251034111238
    guid=3779500033167316302
    vdev_tree
        type='raidz'
        id=0
        guid=969465251034111238
        nparity=2
        metaslab_array=23
        metaslab_shift=37
        ashift=9
        asize=15002928414720
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=8480783778198015884
                path='/dev/ad4'
                whole_disk=0
                DTL=47
        children[1]
                type='disk'
                id=1
                guid=2948080225588684788
                path='/dev/ad6'
                whole_disk=0
                DTL=46
        children[2]
                type='disk'
                id=2
                guid=9319140863036432533
                path='/dev/ad8'
                whole_disk=0
                DTL=45
        children[3]
                type='disk'
                id=3
                guid=8073044400271224919
                path='/dev/ad10'
                whole_disk=0
                DTL=43
        children[4]
                type='disk'
                id=4
                guid=8460640024122198858
                path='/dev/ad12'
                whole_disk=0
                DTL=49
        children[5]
                type='disk'
                id=5
                guid=1792081462916445300
                path='/dev/ad14'
                whole_disk=0
                DTL=42
        children[6]
                type='disk'
                id=6
                guid=4055537292500072897
                path='/dev/ad16'
                whole_disk=0
                DTL=41
        children[7]
                type='disk'
                id=7
                guid=5980344425067716449
                path='/dev/ad18'
                whole_disk=0
                DTL=40
        children[8]
                type='disk'
                id=8
                guid=3779500033167316302
                path='/dev/ad20'
                whole_disk=0
                DTL=39
        children[9]
                type='disk'
                id=9
                guid=17721033744186541552
                path='/dev/ad22'
                whole_disk=0
                DTL=38
So that dd didn't quite get ad20 cleared up?
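
Guess the trailing labels need zeroing too. If I understand the on-disk layout right, labels 2 and 3 live in the last half-megabyte or so of the disk, so something along these lines ought to catch them - untested sketch, and I'm assuming diskinfo's third field is the media size in bytes:

code:
# zero the last 100 MB of the disk, where the trailing ZFS labels live
SIZE=$(diskinfo /dev/ad20 | awk '{print $3}')   # media size in bytes
dd if=/dev/zero of=/dev/ad20 bs=1m seek=$(( SIZE / 1048576 - 100 )) count=100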

roadhead
Dec 25, 2001

feld posted:

8.1 has been out since 7/20. I've already upgraded a few servers. Don't see any issues so far anyway :)

Anyone got tips for upgrading to 8.1 from 8.0, with a custom kernel?


code:

WARNING: This system is running a "hydra" kernel, which is not a
kernel configuration distributed as part of FreeBSD 8.0-RELEASE.
This kernel will not be updated: you MUST update the kernel manually
before running "/usr/sbin/freebsd-update install".

The following components of FreeBSD seem to be installed:
src/base src/bin src/cddl src/contrib src/crypto src/etc src/games
src/gnu src/include src/krb5 src/lib src/libexec src/release src/rescue
src/sbin src/secure src/share src/sys src/tools src/ubin src/usbin
world/base world/catpages world/dict world/doc world/games world/info
world/lib32 world/manpages world/proflibs

The following components of FreeBSD do not seem to be installed:
kernel/generic

roadhead
Dec 25, 2001

Just updated to version 0.5.3 of SABnzbdplus (http://www.freshports.org/news/sabnzbdplus/)

and fixed a couple of package problems with cherrypy by reinstalling, but it still can't launch and gets the following error:

code:
hydra# SABnzbd.py
Traceback (most recent call last):
  File "/usr/local/bin/SABnzbd.py", line 63, in <module>
    import sabnzbd
  File "/usr/local/lib/python2.6/site-packages/sabnzbd/__init__.py", line 66, in <module>
    import sabnzbd.nzbqueue as nzbqueue
  File "/usr/local/lib/python2.6/site-packages/sabnzbd/nzbqueue.py", line 37, in <module>
    import sabnzbd.assembler
  File "/usr/local/lib/python2.6/site-packages/sabnzbd/assembler.py", line 40, in <module>
    import sabnzbd.postproc
  File "/usr/local/lib/python2.6/site-packages/sabnzbd/postproc.py", line 41, in <module>
    import sabnzbd.emailer as emailer
  File "/usr/local/lib/python2.6/site-packages/sabnzbd/emailer.py", line 218, in <module>
    from email.Message import Message
  File "/usr/local/lib/python2.6/site-packages/sabnzbd/email.py", line 39, in <module>
ImportError: cannot import name SplitHost

Anyone else having issues?

EDIT:


Ok, if you go and clean out

/usr/local/lib/python2.6/site-packages/sabnzbd/*

manually, THEN 'make reinstall clean' the port, it works fine.
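
In other words, roughly this (the port lives under news/sabnzbdplus, going by the freshports link above):

code:
rm -rf /usr/local/lib/python2.6/site-packages/sabnzbd/*
cd /usr/ports/news/sabnzbdplus && make reinstall clean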

Will remember this in the future :)

roadhead fucked around with this message at 21:45 on Aug 28, 2010

roadhead
Dec 25, 2001

code:

Oct  2 05:00:00 hydra newsyslog[30326]: logfile turned over due to size>100K
Oct  2 11:51:00 hydra sshd[31151]: Invalid user oracle from 59.49.16.199
Oct  2 11:51:03 hydra sshd[31153]: Invalid user test from 59.49.16.199
Oct  2 11:52:24 hydra sshd[31185]: Invalid user oracle from 125.65.207.10
Oct  2 11:52:27 hydra sshd[31187]: Invalid user test from 125.65.207.10
Oct  2 13:00:47 hydra sshd[31364]: Invalid user oracle from 218.28.36.235
Oct  2 13:00:51 hydra sshd[31366]: Invalid user test from 218.28.36.235
Oct  3 01:02:04 hydra sshd[32816]: Did not receive identification string from 202.213.156.232
Oct  3 01:02:17 hydra sshd[32819]: Invalid user admin from 202.213.156.232
Oct  3 01:02:19 hydra sshd[32821]: Invalid user test from 202.213.156.232
Oct  3 01:02:23 hydra sshd[32825]: Invalid user ghost from 202.213.156.232
Oct  3 01:02:28 hydra sshd[32831]: Invalid user guest from 202.213.156.232
Oct  3 01:02:30 hydra sshd[32833]: Invalid user ghost from 202.213.156.232
Oct  3 01:02:32 hydra sshd[32835]: Invalid user magnos from 202.213.156.232
Oct  3 01:02:38 hydra sshd[32841]: Invalid user aaron from 202.213.156.232
Oct  3 01:02:45 hydra sshd[32843]: Invalid user jun from 202.213.156.232
Oct  3 01:02:47 hydra sshd[32845]: Invalid user rebecca from 202.213.156.232
Oct  3 01:02:49 hydra sshd[32847]: Invalid user einstein from 202.213.156.232
Oct  3 01:02:51 hydra sshd[32849]: Invalid user anna from 202.213.156.232
Oct  3 01:02:53 hydra sshd[32851]: Invalid user sara from 202.213.156.232
Oct  3 01:02:57 hydra sshd[32855]: Invalid user magnos from 202.213.156.232
Oct  3 01:03:01 hydra sshd[32859]: Invalid user amy from 202.213.156.232
Oct  3 01:03:03 hydra sshd[32861]: Invalid user amy from 202.213.156.232
Oct  3 01:03:17 hydra sshd[32867]: Invalid user tracy from 202.213.156.232
Oct  3 01:03:20 hydra sshd[32871]: Invalid user controller from 202.213.156.232
Oct  3 01:03:24 hydra sshd[32875]: Invalid user emily from 202.213.156.232
Oct  3 01:03:31 hydra sshd[32879]: Invalid user backuppc from 202.213.156.232
Oct  3 01:03:33 hydra sshd[32881]: Invalid user backuppc from 202.213.156.232
Oct  3 01:03:47 hydra sshd[32893]: Invalid user amavisd from 202.213.156.232
Oct  3 01:03:49 hydra sshd[32895]: Invalid user edu from 202.213.156.232
Oct  3 01:03:51 hydra sshd[32897]: Invalid user edu from 202.213.156.232

Uhh, so looking at my /var/log/auth.log, this has been going on for, like, almost a year - good thing I only have 2 user accounts on the box, both with excellent passwords, but drat.

Freaks me out that I didn't notice this sooner. No more letting sshd listen on the default port!
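
For anyone following along, moving sshd is just a one-line change plus a restart - the port number here is made up, I'm obviously not posting the real one:

code:
# in /etc/ssh/sshd_config
Port 22022

# then
/etc/rc.d/sshd restart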

roadhead
Dec 25, 2001

Bob Morales posted:

There are other ways to go about it besides changing the default port. You'll just get scanned and then they'll try that port anyway.

Change to keys instead of interactive logins, for one.

You would be right for a dedicated attack, but I think these are lazy Chinese hackers with a collection of pilfered user/pass combos just hammering every box out there that answers on port 22. Expanding that to scanning all ports on every IP for listening services multiplies the required traffic about 65,536 times.

I'll definitely be checking the auth.log more often, and if they do bother to discover the new port, additional measures will be taken.
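
Fair enough - if they ever do find the new port, switching to keys is on the list. From memory it's roughly this, with the user, host, and port below being placeholders:

code:
# on the client: generate a key pair and push the public half to the server
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub | ssh -p 22022 myuser@hydra 'cat >> ~/.ssh/authorized_keys'

# then in /etc/ssh/sshd_config on the server
PasswordAuthentication no
ChallengeResponseAuthentication no
(Assuming ~/.ssh already exists on the server with sane permissions.)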

roadhead
Dec 25, 2001

Which Java run-time is the best? I use PS3 Media Server on my box and presently it's on diablo-jdk1.6.0 - but I thought I heard somewhere that OpenJDK was better? Anyone have an opinion? :)

roadhead
Dec 25, 2001

Finally getting around to trying to actually use OpenVPN - I can load pages served by the BSD box across the link, and other machines on the LAN can ping the VPN IP of the server.

But I can't ping other machines on the LAN with the client, or ping the client from the LAN.

I added a route to my gateway to direct 192.168.254.0/24 traffic to 192.168.1.2 - the local IP of the BSD box. I can ping 192.168.254.1 from either side, LAN or VPN - but not, say, 192.168.254.6, which is the IP my client is getting. I can ping 192.168.1.2 from the VPN, but not 192.168.1.1 or anything else.

Must be a routing/firewall thing I've yet to configure eh?

UPDATE: Crashed Apache on the gateway while using the web interface to change the metric on the only static route I've put on the device. I telnet in and view the routes, and suddenly it decides to work. I guess it wasn't fully set, or needed a bump. Of course this was a problem with the one device in the equation running Linux!



EDIT: Ok, I can ping, and FileZilla will SFTP (over VPN, seems like overkill!) but won't do vanilla FTP. I can use the Windows CLI FTP. I have also set the DNS IPs and tested them using nslookup - I get DNS resolution via nslookup, but not when just using ping at a cmd prompt. Could this be a problem with my config? It's an IP tunnel using UDP.

EDIT2: Discovered how to push DHCP options like the DNS suffix to my Windows clients, and DNS resolution is working great now. I much prefer http://camera/ to http://whatever.dyndns.org:8080/ - but I am just weird I guess :)
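
For my own notes, the server-side bits that made the difference boiled down to a few push lines like these - the addresses and domain below are examples, not my real ones:

code:
# in the OpenVPN server config
push "route 192.168.1.0 255.255.255.0"
push "dhcp-option DNS 192.168.1.2"
push "dhcp-option DOMAIN home.example.com"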

roadhead fucked around with this message at 15:38 on Jan 17, 2011

roadhead
Dec 25, 2001

This is why you set up several "datasets" under the one volume - each can have different ZFS options, such as which checksum hash to use, compression (and at what level), and lots of other stuff.
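
As an illustration, creating a couple of datasets with different options is just this sort of thing - the property values here are arbitrary examples, not recommendations:

code:
zfs create storage/stuff
zfs set compression=off storage/stuff
zfs create storage/docs
zfs set compression=gzip storage/docs
zfs set checksum=fletcher4 storage/docs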

roadhead
Dec 25, 2001

Marinmo posted:

1,5TB Seagate Barracuda 7200rpm, so it's not the green edition stuff that's hampering me.

I copied some already compressed files in this case. Haven't tried uncompressed stuff. The CPU is a C2D 6750 with 4 gigs of ram, should be plenty, no?

Nopes, volume was empty

I did skim through the manpages when creating the volume, but it's not like they say "use this for this and that" etc. Could you be a little more specific please? I don't really see the point of doing RAID if you're gonna split volumes anyway (but I would like to learn more about how to use ZFS properly!); I had smaller hard drives in this machine before w/o LVM and it was just gruesome.

Well I don't really need to enable it, just thought it was a neat idea. Almost slicing speeds by three was not acceptable, so I just removed it. :)

All the datasets pull from the same pool of free disk space; it's just that you can have different options for each one depending on the kind of data you are storing.

Look at my free space, for instance:

code:
Filesystem        Size    Used   Avail Capacity  Mounted on
storage2          807G     35K    807G     0%    /storage2
storage           5.3T     38K    5.3T     0%    /storage
storage2/stuff    6.5T    5.7T    807G    88%    /storage2/stuff
storage2/docs     911G    104G    807G    11%    /storage2/docs
storage2/bin      843G     36G    807G     4%    /storage2/bin
storage/stuff     5.3T     64K    5.3T     0%    /storage/stuff
I edited out all the non-ZFS file systems, but stuff, docs, and bin all have different ZFS settings while being in the same pool and sharing free space.

roadhead
Dec 25, 2001

conntrack posted:

I had two axe devices that simply burnt out after being online for a month. After it happened two times I just gave up on the USB plan.

I thought GigE performance on my re0 device was bad (no jumbo frame support in the FreeBSD driver) - but at least it's stable!

roadhead
Dec 25, 2001

complex posted:

8.2-RELEASE came out on the FTPs on Sunday. Please use a mirror.

'freebsd-update fetch' would choose a mirror at random, correct?

roadhead
Dec 25, 2001

IanMalcolm posted:

Guys, I have a ZFS problem here. I'm currently running FreeNAS 0.7, and the new 8-RC1 version came out recently with a new ZFS version. In the readme they say that it is not possible to upgrade an existing zpool, doing so would destroy the data.
Is that a problem with FreeNAS or with FreeBSD? I want to install vanilla FreeBSD 8.2 on my NAS box, but I don't want to lose those ~3TB of data...

That is truly strange; both my pools went through zpool upgrade with all their data intact. However, this is FreeBSD 8.1->8.2 and I believe ZFS version 14->15 - not sure why FreeNAS would delete all your data...

roadhead
Dec 25, 2001

Ok I finally got around to the rest of the steps on the way to finishing up my 9.0 upgrade.


I did the "shutdown -r now" part (really the thing that made me keep putting it off, didn't want to lose the up-time!) and started the "portupgrade -af" part earlier today... on a single user session... when I am ALWAYS OTHERWISE USING TMUX.

The reason I wasn't using tmux is because, well - uhhh, I was re-building all my ports and was going to do another "freebsd-update install" soon, so I thought a single actual SSH session would be fine.

Of course this compiling went on ALL DAY TODAY. And when 5 rolled around I stupidly just shut my notebook (with the PuTTY session on it) like I always do, cause hey, tmux, right?

When I got home and SSHed in from my desktop it hit me - I am an amazingly stupid human being who should not be allowed to play with the switches and knobs. I killed my shell in the middle of a "portupgrade -af", surely leaving at least one half-compiled port somewhere on my system.

So uhhh - will it start all over at the beginning, you think, or will it be able to pick up where it left off?
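
Lesson learned either way - next time the whole thing goes into a detached tmux session from the start, something like:

code:
tmux new-session -d -s ports 'portupgrade -af'
tmux attach -t ports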

roadhead
Dec 25, 2001

Xenomorph posted:

It just appeared on update3/4/5.freebsd.org!

Edit, updating now...

# freebsd-update upgrade -r 9.1

I probably should have known this and been ready, but upgrading to 9.1 from 9.0 wiped out all the "state" in transmission-daemon. All the BSD ISOs I was helping seed have to be added back manually and re-verified.

BSD is super easy to admin but I always forget the little things since I don't generally have to do anything TO the box besides use it 99.9% of the time.


roadhead
Dec 25, 2001

Hey, over the weekend 2 of the GPT devices in my ZFS Raid-Z2 decided to disappear. Sort of. I can still pull the SMART info from these drives via their adaxx designation, but their (label?) in /dev/gpt/ is missing. And 'zpool status' shows the following -

code:
root@hydra:/dev/gpt # zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub canceled on Sun Jan  6 15:10:29 2013
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            4294506216448113758   REMOVED      0     0     0  was /dev/gpt/bay7
            13165379476280596928  REMOVED      0     0     0  was /dev/gpt/bay8
            gpt/bay9              ONLINE       0     0     0
            gpt/bay10             ONLINE       0     0     0
            gpt/bay11             ONLINE       0     0     0
            gpt/bay12             ONLINE       0     0     0

errors: No known data errors
                                            
Contemplating trying to add the "failed" devices back after recreating their GPT labels I guess, but this is probably a bad idea.

On the other hand, with the two problem children failed out, that particular array is faster than it's been in a long time!

Two fresh drives is probably the right answer here eh?

'gpart show' output appears to be missing for the two devices in question.

I apparently did not write down the correct /dev <-> gpt translations. Great.

'dmesg' says -

code:
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 40 66 f2 ef 40 00 00 00 00 00 00
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 a6 ef ef 40 00 00 00 01 00 00
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 26 f2 ef 40 00 00 00 01 00 00
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 26 f1 ef 40 00 00 00 01 00 00
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 a6 ed ef 40 00 00 00 01 00 00
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 40 a6 f0 ef 40 00 00 00 00 00 00
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 01 22 76 46 40 00 00 00 00 00 00
(ada3:siisch3:0:0:0): CAM status: Command timeout
(ada3:siisch3:0:0:0): Retrying command
(ada3:siisch3:0:0:0): lost device
(pass3:siisch3:0:0:0): passdevgonecb: devfs entry is gone
siisch2: device reset stuck (timeout 100ms) status = ffffffff
(ada2:siisch2:0:0:0): READ_FPDMA_QUEUED. ACB: 60 01 4d 6b 30 40 22 00 00 00 00 00
(ada2:siisch2:0:0:0): CAM status: Command timeout
(ada2:siisch2:0:0:0): Retrying command
(ada2:siisch2:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 66 f3 ef 40 00 00 00 01 00 00
(ada2:siisch2:0:0:0): CAM status: Command timeout
(ada2:siisch2:0:0:0): Retrying command
(ada2:siisch2:0:0:0): READ_FPDMA_QUEUED. ACB: 60 80 e6 f4 ef 40 00 00 00 00 00 00
(ada2:siisch2:0:0:0): CAM status: Command timeout
(ada2:siisch2:0:0:0): Retrying command
(ada2:siisch2:0:0:0): READ_FPDMA_QUEUED. ACB: 60 80 66 f4 ef 40 00 00 00 00 00 00
(ada2:siisch2:0:0:0): CAM status: Command timeout
(ada2:siisch2:0:0:0): Retrying command
(ada2:siisch2:0:0:0): READ_FPDMA_QUEUED. ACB: 60 01 22 76 46 40 00 00 00 00 00 00
(ada2:siisch2:0:0:0): CAM status: Command timeout
(ada2:siisch2:0:0:0): Retrying command
(ada2:siisch2:0:0:0): lost device
(pass2:siisch2:0:0:0): passdevgonecb: devfs entry is gone
(ada3:siisch3:0:0:0): removing device entry
(ada2:siisch2:0:0:0): removing device entry

Yea that doesn't look good :)
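
If I do end up going the fresh-drive route, I'm guessing the dance per disk is roughly this - the device name and bay label below are assumptions, and the long number is the GUID zpool status shows for the missing bay7 device:

code:
# say the new disk shows up as ada3; give it a GPT label and swap it in
gpart create -s gpt ada3
gpart add -t freebsd-zfs -l bay7 ada3
zpool replace storage 4294506216448113758 gpt/bay7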

roadhead fucked around with this message at 23:44 on Jan 7, 2013
