Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Regarding RAID-Z, it's designed differently in order to fix some issues with RAID-5 (the write hole). Sun can't go around claiming to have a filesystem that stays consistent after crashes and then use RAID-5 with a static stripe width loving up the data.

As far as RAID-1 read performance goes, given a good IO scheduler (which negates any need to "read in sync"), you can get near double read speeds.

I run two WD RAID Editions (WD5000ABYS) in a RAID-1. I can get up to 70MB/s off a single drive; in mirror configuration, up to 130MB/s. The IO scheduler involved here is ZFS's IO pipeline. Both measurements were taken by dd'ing a huge file on the filesystem to /dev/null, using the filesystem's record size as block size.
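For reference, the measurement boils down to something like this (pool and file names are made up, 128K being the default record size):

dd if=/tank/bigfile of=/dev/null bs=128k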

What should be kept in mind with these numbers is that ZFS is a COW system with load balancing and whatnot. Anyone with a defragmentation fetish would weep blood.

Shalrath posted:

On a similar note, I believe the inode table (or whatever NTFS uses) has gone bad on my laptop's windows partition.
It's called the Master File Table (MFT), and it's mirrored on the drive. If chkdsk can't fix the MFT, something else may have broken.

stephenm00 posted:

why isn't zfs and raid-z a more common option? There must be some disadvantage for home users right?
The disadvantage is that it can have quite a memory footprint. Actually, that's not entirely correct. It's just that throwing a lot of memory at it can make it fly even more. The IO pipeline of ZFS takes huge advantage of a large ARC (adaptive replacement cache, which is what ZFS uses), because it can detect various read patterns and prefetch accordingly into the ARC.

The ARC resizes with memory pressure. At least it does in Solaris; I'm not sure if that works already in FreeBSD or if it's still a fixed setting (I think it was 64MB). Anyway, you can manually set a limit, which would be stupid, but people get too impressed with code making gratuitous use of free unused memory (see the Vista Superfetch bullshitting).
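For the record, if you really wanted to cap it on Solaris, it's just a tunable in /etc/system (the value here is an arbitrary 1GB; takes a reboot):

set zfs:zfs_arc_max=0x40000000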

Idiotic anecdotal reference: When I was new to Solaris and running ZFS, watching a movie off the hard disk in the background, I was wondering why the drive LED wasn't going at all and why I was having occasional sound skipping (lovely driver caving under load, fixed now). At some point while diagnosing, I ended up checking the IO stats in ZFS; it turned out ZFS had figured out I was doing linear reads and was actually reading 180-200MB at once every 15 minutes.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

More recently, I installed some RAM that's either bad or the motherboard hates and Solaris crashed and corrupted a file* that prevents it from booting at all, so I figured I'd try BSD and see if it worked. Unfortunately, it's refusing to recognise zpool (which I admit I was not able to properly export). Back to solaris now, hope the newest version of the developer edition is more stable. Also I hope it will import the array because if not I'm really gonna :cry:
FreeBSD only supports pool version 2. That, and apparently GEOM's interfering. If it ain't either of these, you can supply the -f flag to zpool import. Exporting the pool is just a management semantic.
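In other words, something along these lines (the pool name is whatever yours is called):

zpool import (lists the pools it can find on the attached devices)
zpool import -f tank (forces the import despite the pool not having been exported)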

As far as Solaris not booting goes, if GRUB and failsafe mode still work, the boot archive is hosed. Ain't a biggie, since you can recreate (i.e. update) it in failsafe mode.

Munkeymon posted:

You should know that if you plan on running ZFS and using Samba, you might have problems. My server, running Solaris, had stability issues for months and then I stopped listening to music stored on the network.
Might consider looking into the most recent Nevada builds. They now come with a CIFS server written by Sun based on the actual Microsoft documentation. If you've another month's time, you should wait for the next Developer Edition based on snv_87, which apparently comes with an updated version of the Caiman installer that supports installing to ZFS and setting up ZFS boot (single disk or mirror pool only). I'd figure that boot files on ZFS are more resilient to random crashes loving up your boot environment.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

That doesn't make sense based on what I found on their wiki: http://wiki.freebsd.org/ZFSQuickStartGuide they support features from at least version 8, though I can see that the features don't exactly stack, and so could be skipped for lower-hanging fruit. Besides, the -f flag doesn't do anything when the system swears there are no pools or if it simply can't start ZFS in the first place.
What I see is up to version 5, which is gzip compression. The features above that aren't really that important (yet), but I figure zpool throws a fit if the pool version's higher. Actually, I don't even know how it'd behave with a pool version higher than what's supported. Silence might just be it, perhaps.

quote:

I thought I read somewhere that export wrote some extra metadata, but I could easily be wrong since all my research is a year old at this point.
Export only sets a flag in the pool marking it as unused and removes its entry from zpool.cache.
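For reference, that's all this does (pool name made up):

zpool export tank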

quote:

I'd rather have the newer system going and the only things I care about are the pool and the Azureus install, which is only valuable because it required a retarded amount of effort to get working.
I don't get what you mean. You've already set up FreeBSD? Fixing the boot archive is one single line. Actually, the more recent Nevada builds should notice it themselves when booting to failsafe and ask you if it should be updated.

bootadm update-archive -R /a

(Since in failsafe mode, it mounts your root fs to /a)

quote:

I'd much rather get an AMD64 build running, but that apparently means conjuring nforce drivers out of thin air, which I'm not up for. Maybe I will just get a minimal effort system running and ride it out until the next version if it's that close :\ I miss the warm, fuzzy feeling of having nightly automated backups.
Uhm. I still run snv_76, half a year old, and it's pretty stable on my NForce4 mainboard. And it boots to 64bit mode. They've shipped a driver for the Nvidia SATA controller since, I think, snv_72.

servo@bigmclargehuge:~ > modinfo | grep nv_sata
38 fffffffff7842000 5b88 189 1 nv_sata (Nvidia ck804/mcp55 HBA v1.1)

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

Oh sorry, I read here: http://wiki.freebsd.org/ZFS that they have delegated admin going, which is a version 8 feature. The pool it failed to notice was at version 3, so I don't think it was a version problem.
Must be that GEOM thing. I think ZFS in FreeBSD can't handle full-disk vdevs properly, since Solaris partitions a disk as GPT when you slap ZFS across the whole thing, and GEOM kind of trips over itself with these. At least I think that was a limitation some time ago: you had to set up a full-disk BSD slice first and make ZFS use that.

quote:

Yeah, I installed FreeBSD on a spare drive that I swapped in and ZFS didn't work. I got an error message about it being unable to initialize the ZFS system and I couldn't find anything helpful on Google, so I installed the latest Solaris Express Developer over it.
The Developer Editions are actually pretty stable. The development process of Solaris Nevada is pretty cool (and I guess similar to FreeBSD's). Everything has to be tested, approved, and tested again before it can go into the main tree that'll become the Community and Developer editions. As said, I'm running a Nevada build and it's mighty stable.

quote:

I did try updating my old install (~1 year old now), but the installer said there wasn't enough space on the drive. I don't see why because that drive has 62 GB free on slice 7, though I may be misunderstanding the update procedure.
With the standard layout, slice 7 is /export, and slice 0 is / with all the rest. The loving dumb thing with the current Solaris installer is that it sizes said slice more or less to exactly what it needs to install the system. If you try a regular update into the same slice, you'll be out of luck.

If you still want to run Solaris and get rid of these silly hassles, wait a month for the snv_87 Community Edition, following the next upcoming Developer Edition. The new graphical installer (which you access under the Developer Edition boot option :psyduck: ) will support ZFS root and boot. That way, you don't have to deal with mis-sized slices on upgrades anymore. Snap Upgrade will also be integrated, which is like Live Upgrade but for ZFS and taking advantage of it.

(ZFS boot currently works only on single-disk or single mirror pools, so you need a separate pool on your system disk.)

I'm also waiting for that build. If you intend to use GUI stuff on your server (locally or via XDMCP), note that snv_88 will have Gnome 2.22 integrated; you don't want that, because it appears the new GVFS stuff makes it crash-happy.

quote:

Did you have to do any special configuration for that or did everything work right from the get-go?
Nope. Pre snv_72, the SATA drives acted like regular IDE drives towards the system. I thought that was normal behaviour from the Nvidia SATA chipset. With snv_72, the Nvidia SATA driver was released and the disks act like SCSI disks now (which they're apparently supposed to).

quote:

On a side note, I can't believe you use > in your prompt. I'd constantly be checking to be sure I wasn't redirecting output into an executable file :psyduck:
It's colored, either green for normal user or red for root (additional indicator). :eng101:

Actually, I could remove the drat user name, because there's just me and root.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

I had assumed it would just move or copy everything on slice 0 to someplace on 7 and then merge new files in, but I guess the installer isn't that smart.
I wouldn't know, it never worked at all for me, and I have 16GB root slices. Live Upgrade is also stupid in my general direction, so I'm hoping for Snap Upgrade to work well.

quote:

I don't really want Gnome at all because I prefer KDE :) I do use the GUI, though, for the torrent client.
Sun's own distro will be Gnome based, due to their investments in it. There's at least one Sun engineer working with the KDE team to port KDE4 to Solaris. I guess once it's workable, and Project Indiana (the prototype OpenSolaris distro incl. distro constructor) reaches beta and/or release stage, it'll be available as an option there. Not sure how it'll be handled with the SX*E's.

quote:

Also, I don't think putting the system root in the pool is really something I care to do.
What I was saying is that, if you were to use a Solaris build with the new revision of the installer, you should create a separate pool on the separate disk you run your system on. It won't be redundant, but you get the advantages of pooled storage (the fixed-slices crap goes away) and of Snap Upgrade, which will employ snapshot and clone magic to update your system during regular operation (and make the result available on reboot). That pool would be separate from your data pool.

Munkeymon posted:

Will the community edition ever come out in 64-bit, do you think? You seem way more knowledgeable about the Solaris community than I am. Oh, and what about the blurb on the download page that says it's 'unsupported'? Is that Sun speak for 'you're pretty much on your own'?
On boot, it's decided whether the 32bit or 64bit kernel gets loaded. The userland is mainly 32bit, but ships 64bit versions of most libraries. Components like Xorg are available in both versions; which version gets loaded is decided with isaexec magic.

Right now, it's the same argument as with Windows. There's no real point in a 64bit userland, except in places where it makes sense for the last bit of performance, i.e. kernel and Xorg.

And unsupported in the sense that they won't be liable if you run NASDAQ on it and then whoopsy-daisy, your data goes missing.

quote:

Well on my first try with a 64-bit build the system didn't seem to be able to see them at all, IDE or no.
That's strange. I figured that with old ATA chips the situation is similar to SATA's AHCI, in that there's a generic way to operate them.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

TheChipmunk posted:

Is OpenSolaris a respectable option for ZFS and homebrew NAS boxes? (By NAS I mean old computer
You do know that ZFS originates out of (Open)Solaris?

--edit: What I'm saying is that if you want the best stability for ZFS, you should install the turf it was born on. To check if all your hardware's supported, download the Project Indiana preview from opensolaris.org. It's an installable LiveCD. There's also an OpenSolaris distro called NexentaStor specialized for NAS crap, which comes with a Web UI.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Works just fine as a desktop OS here. On the surface, it's just the same as Linux. GNOME, Compiz, Firefox, Thunderbird, etc. blahblah. Memory footprint isn't too different either. You just don't have nice and cosy package management a la Ubuntu. That's coming with Project Indiana; give it at least another 6 months.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

stephenm00 posted:

To implement ZFS on a nas, wouldn't that require a lot of ram and cpu?
CPU isn't an issue, unless you've a disk subsystem that can shove god knows how many MB/s. The default checksumming algorithm is pretty fast. If you resort to SHA256 hashes exclusively, there might be a problem, but it would still require decent throughput to turn the CPU into the bottleneck.
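If you did want SHA256 on a particular filesystem, it's just a property (names here are made up):

zfs set checksum=sha256 tank/important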

RAM isn't either, though the more RAM, the more disk cache, and consequently performance, you get out of it. The ZFS cache does more than simple LRU caching. ZFS actively prefetches large chunks of data if it sees reason to (e.g. streaming applications or databases).

quote:

also could someone explain "you can't currently expand a VDEV's"
Every device, and that includes mirror and RAID-Z pseudo-devices, is a virtual device. A RAID-Z array is a vdev, too. Due to the nature of how RAID-Z works, you can't just add a drive to it and quickly rearrange parity. Well, you could, but the process would be a huge undertaking and not really a sane idea either in the environment ZFS is targeting (enterprise storage).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Single-disk expansions aren't possible; you'll have to add another array to the pool to grow it.

As for expansion getting implemented, it isn't as easy as with RAID-5, where you mostly have to move two stripes per row. Each ZFS filesystem block is spanned across all devices, and because of that, stripe sizes are variable. To add another drive, ZFS would have to comb through the whole data set and restripe ALL data. Quite dangerous.

Last time this was brought up on the mailing list, there was stuff being added that would enable such a scenario. Whether it'll be implemented remains to be seen.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If you want ZFS and you want stability, use Solaris. It's no more unwieldy than FreeBSD. To check whether your hardware is supported or not, download the OpenSolaris Developer Preview, which is a live CD.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
ZFS takes anything that has character-device-like semantics, including files.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

poo poo Copter posted:

Thanks. Upgrading in the future is one concern of mine though. Would I absolutely need another 5 disks in order to upgrade/expand, or could I add say 3 to another pool? How exactly does the expansion process go? Would I need to have two separate arrays, or could I add the other disks to the same array - but on a separate pool?
To expand a pool, you throw additional vdevs into it. How the vdevs are made up isn't important. They can be files, single disks, RAID-Z arrays or mirrors (the latter two are each considered a single vdev). Two RAID-Zs in a pool don't need to match in size or number of disks, either. Nor do the vdev types need to match; you can mix mirrors with RAID-Zs in a pool. If there are multiple vdevs (e.g. two RAID-Z arrays), ZFS spreads the writes across them, influenced by metrics like available write bandwidth and available free space.
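For example, growing a pool by another RAID-Z vdev would look something like this (pool and device names are made up):

zpool add tank raidz c2t0d0 c2t1d0 c2t2d0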

poo poo Copter posted:

Also does anyone know how Solaris is with VMWare server? I'm interested in virtualizing a Windows dev environment on this file server as well. I know it's braindead easy to setup in CentOS or Debian, but I haven't been able to find any specific info for Solaris.
To use Solaris as a virtualization host, your options are either using a Nevada build (Solaris Express, any recent edition, or OpenSolaris 2008.05) as a Xen Dom0, or using VirtualBox 1.6. I figure VirtualBox would be the better option for you. It also comes with guest drivers for Windows, speeding things up quite a bit, plus a seamless mode to merge the Windows desktop into your Solaris desktop. There's no VMware for Solaris (yet?).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

poo poo Copter posted:

So am I correct in assuming that if I have multiple vdevs in the same pool, that they could be accessible as a single large volume?
Yep, that's the idea. The vdevs are kind of concatenated (actually striped dynamically) and create the storage pool that ZFS filesystems draw from. These ZFS filesystems are really lightweight and more of an abstraction, mostly there to specify different data policies (like a different checksum algorithm, compression, NFS sharing, record size, etc.), so don't be afraid to use the zfs create command. ZFS filesystems aren't sized; they only draw from the pool what they need and give back what's freed.
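E.g., carving out a filesystem with its own policies is as quick as this (names and properties are just an example):

zfs create tank/media
zfs set compression=on tank/media
zfs set sharenfs=on tank/media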

poo poo Copter posted:

Hmm, it sounds like I will have to give VirtualBox a shot.
Works well enough here to run Windows Server 2003 stable.

If you've time to fiddle, you should consider using a ZFS ZVOL, though. It's more efficient than a regular file on ZFS as a VM container. Goes something like this, creating a 10GB thin provisioned virtual disk in the ZFS pool (as superuser, or using pfexec if your account has the root role):

zfs create -s -V 10G pool/windows
ls -l /dev/zvol/rdsk/pool/windows (the symlink target will look something like the path in the next line)
chown youraccount /devices/pseudo/zfs0@1:a,raw (i.e. whatever the ls -l gave you)

You need to change the ownership on the symlink target. Adjust permissions as needed. Then as regular user:

VBoxManage internalcommands createrawvmdk -filename foo.vmdk -rawdisk /dev/zvol/rdsk/pool/windows

The ZVOL should then show up as an available disk in the GUI.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

napking posted:

i installed 2008.05 on top of the b81 and it's been great so far the past few days. zfs root is awesome!
Apparently they're going to add the b89 packages sometime in the rest of this week (2008.05 is b86; the b89 packages include Gnome 2.22). Be prepared to witness the awesomeness that is pkg image-update and boot environments. Gotta love having pkg use ZFS to create a clone of the current system and update the clone, so it doesn't interfere at all with you and your work.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

napking posted:

do you know if the current version of zfs lets me change a mirrored two disk pool into a raidz pool by adding a third disk? i'd really like to expand this pool without going through hoops.
ZFS doesn't do single-disk expansions. It's targeting the enterprise, where whole arrays are added (doing single-disk expansions gets you laughed out of the IT office). Your only option is to back up your stuff, kill the pool and create the RAID-Z vdev from the devices.

Adding the ability to expand existing RAID-Z arrays is slowly being considered. At least the logistics of this are already being discussed.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I remember there being a command to do this, because I read about installing CIFS on the OpenSolaris forums and executed it, but I'm failing at digging it up again.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
ZFS is pooled storage, so that's normal.

You should know though that there's no vdev removal functionality yet; it's still in the works. The way ZFS works in regard to geometry and striping makes it a little more complex than in other pooled-storage filesystems.

So if you want to switch out a vdev, you have to put the new one in, replace the old one with the new one, and only then can you remove the old one. The new vdev has to be the same size as or larger than the old one.

Delta-Wye posted:

Could I just swap the 250s out one by one and let it rebuild? Would I expect to see 640G or 960G?
After resilvering, ZFS accounts for the smallest disk in the vdev. A mirror made from a 250GB and a 500GB disk results in a 250GB mirror. If you then replace the 250GB disk with a 500GB one, you finally get the other unused 250GB.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Nam Taf posted:

So I am right in assuming that if I have, say, 4x750GB drives, I get 3x750GB space. I then replace them 1 by 1 with 1TB drives, letting the array rebuild itself each time. After I replace the 4th, I should automagically see 3TB of space appear, yes?
Yes.

Delta-Wye posted:

:words:
That's sure some drive fuckery. Remember that any drive you swap in has to have at least the same number of sectors as the smallest drive in the RAID-Z. You can always go up, but not down. Once you've swapped the 75GB drive for a 250GB one, you can't go back the other way.

I say number of sectors because if you were to replace a 250GB drive with another 250GB one that happens to be a bunch of sectors smaller due to manufacturer geometry differences, ZFS will tell you to go screw yourself.

You have to manually replace the drive. If you remove it before inserting the new drive, it'll report the drive missing. It's zpool replace pool olddev newdev.
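With made-up pool and device names, that's:

zpool replace tank c1t2d0 c1t3d0 (new drive on a different port)
zpool replace tank c1t2d0 (new drive sitting in the old one's slot)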

PS, you can't boot from RAID-Z (yet). You'll be needing a small boot drive.

Delta-Wye posted:

Now if only I knew the mobo I want to use supported freebsd - something tells me it probably doesn't. :(
Buy the mainboard and try. It's very unlikely that it won't be supported. If anything, you may have to resort to buying a dedicated network card because the onboard junk isn't (properly) supported. Everything else uses standard interfaces: OHCI and EHCI for USB, AHCI for SATA, and so on.

Also, try going with Solaris first, before trying FreeBSD. If you're serious about running a ZFS fileserver and value stability, you should go with the native environment. It isn't a big scary beast.

The OpenSolaris image, which coincidentally has a very nice and friendly installer on its LiveCD, will be updated shortly with new bits. It's a 700MB live CD that also has a device detection tool, so you can check that everything's supported before installing.

I'm running Solaris on an X48 mainboard, with an Nvidia card, a bunch of drives, a Xonar DX and a PCIe Intel NIC, and it's all well supported and hardware accelerated where available.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Delta-Wye posted:

Thanks for the info. I've used Linux extensively, and OpenBSD and FreeBSD less so (although I like them a bit better) but the few minutes I sat in front of Solaris made me want to kill myself. Perhaps I ought to suck it up and try again!
You've got to be making GBS threads me. Even the normal Solaris 10 package comes with an installer that's not really harder than the FreeBSD one, and it even comes with Gnome.

Try the OpenSolaris 2008.05 package. If that's still bothering you, then I don't know.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Delta-Wye posted:

For what it's worth, the last time I ran Solaris was... 2001? 2002? Long before OpenSolaris, I'm pretty sure it only ran on Sparc systems then. And it wasn't installing it, it was just using it that I found distasteful. It just bugged me for whatever reason.
Can't help it that you had to use CDE, but these days, it comes with Gnome.

OpenSolaris comes as a live CD with an installer similar to Ubuntu's. If that and Gnome still bug you, I don't know.

And if you want a stable and fast ZFS server, you better go with Solaris. It's the native runtime environment. Especially because they also have a kernel CIFS implementation based on actual Microsoft documentation, integrated with ZFS and not available on FreeBSD. All in all quite a bit faster than this Samba poo poo and worth it if the clients are Windows.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

vanjalolz posted:

solaris is weird, i'm trying it in a vm
In what ways would it be weird? It sure can't be the user interface, since it's Gnome, the same as in Linux and BSD.

If you're by chance using Solaris 10 (and maybe using CDE), ditch it and go with the Nevada builds. They're pretty stable. Even better, go with OpenSolaris (a Nevada distro), which supports installing to ZFS boot (please get build 93, to avoid the boot breakage introduced between builds 86 and 89 when updating the whole system).

Right now, if you're serious about longterm ZFS usage, Solaris is the way to go, at least until that Pawel guy removes the experimental status of the FreeBSD port. He has his reasons why it's still marked as such.

vanjalolz posted:

ZFS boot
Where does that come from?

First of all, ZFS boot is still WIP. The only thing really supported is booting from a single-disk pool. A single mirror works, too. RAID-Z and multi-vdev don't work yet, since ZFS doesn't yet know how to keep a complete boot archive on every drive in the pool and make it discoverable (remember, RAID-Z and multi-vdev are both striped in various ways). ZFS boot phase 1 was only committed in build 88, which was like 2.5 months ago. Before that, it was more of a hack on top of experimental support.

Second, if you intend to create a file server, you should maintain your system on a separate drive anyway, so do exactly what the troubleshooting says, though I haven't found it necessary yet.

Lastly, don't be scared by these so-called memory requirements of ZFS that get thrown around. Those are recommendations for enterprise use. Except that every rear end in a top hat keeps quoting them even at home users.

Going with OpenSolaris gets you Boot Environments, which use ZFS snapshots and clones to play a kind of Time Machine and Pit Stop for the operating system. Means you can create clones of your current system at will, boot into each one as you please, update each one separately and from within other BEs, etc. blah blah.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Utility placement in Solaris is actually the true Unix way, and I guess it's just a matter of getting used to it. BSD is also different from Linux on the command line. If you install the OpenSolaris distro (from opensolaris.com, not .org), however, you get the GNU tools by default.

Anyway, you're currently using the Solaris Express Community Edition. That's fine, you just don't get the OpenSolaris tools like IPS and Boot Environments. The latter allows for way easier upgrading and experimenting. OpenSolaris is also a live CD, so you don't need to install it for basic tests. OpenSolaris does ZFS root only, no UFS. And it has an Ubuntu-style installer, not the silly CDE one.

As said, the memory requirements depend on what you want to do. For a home file server, even a gigabyte is enough. ZFS makes gratuitous use of memory for functions like caching and especially prefetching (it detects linear and strided reads and, depending on the pattern, prefetches very large chunks; think 100-200MB). But it can also run in low-memory situations. The cache reacts to memory pressure and shrinks if necessary.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'm trying to get my head around unRAID. You say polarity (parity?) disk, so I suppose it's simply a RAID-4 with a little awareness of the disk geometries?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
ZFS writes GUIDs onto the drives. If you pull one, the array will drop to a degraded state and, I think, still allow writes. If you put the pulled drive back, it'll recognize it via the GUID and update the drive (the versioned metadata tree approach speeds that up, since it can figure out what changed while the drive was offline). If you add a new drive instead, you need to manually replace the missing drive with it, using the zpool command. That resilvering will take a while, since the new drive is clean.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Farmer Crack-rear end posted:

Why is two small power supplies better than one larger one?
Optimum power conversion efficiency, I'd figure. Unless you size the bigger one down to the exact power needs.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
RAID-Z is equivalent to RAID-5, RAID-Z2 to RAID-6. The number just indicates how many parity stripes per row. Actually, no one says RAID-Z1.

As far as onboard graphics go, what chip? The bundled Xorg comes with drivers for all sorts of Intel chips. It also comes with the open-source ATI driver, but I'm not sure if it's on the initial LiveCD or in one of the updates.

If you choose to pkg image-update to get the driver or just to get the newest bits, be sure to head over to opensolaris.org and go to the Indiana forum, because there's some manual work required (scroll down to IMPORTANT in the OP) due to ZFS boot changes. Each pkg image-update creates a new boot environment; if the update doesn't please you, you can boot back to the pre-update boot environment and pretend you never updated.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
It supports C- and P-states on Intel; the latter is frequency scaling. Activating it seems a mystery, though. PowerTOP once suggested it, I enabled it and it worked, but only for that session. I'm not sure how to do it manually, and PowerTOP never suggested it again.

Then again, my Core 2 Quad only had C0 (running) and C1 (simple halt), as well as only two P-states, i.e. 2.67 GHz (full speed) and 2.0 GHz. Whether that's coming from the CPU or Solaris' power management support, I don't know. But I think it'd be safer to go Intel, since there are Intel developers contributing code for power management and scheduler stuff.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Hanging around on the OpenSolaris mailing lists a lot. --ninja edit: Where a lot of their developers also post.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Insane2986 posted:

Will RAID 5/6 "wake up" my drives if they are inactive? (I have Vista set to turn drives off after 10 minutes of inactivity to cut down on heat)
What do you mean by that?

If you're hoping that the parity drives would spin down when there's no write activity, you'll be out of luck. Parity is spread across drives; the parity stripe resides on a different drive for each row. For that matter, access time updates happen on reads too, creating writes.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

vlack posted:

Is there a network-aware package manager yet? Does it have third-party packages? Does it replace smpatch and distribute patches to Sun-supported software? Can I use it to upgrade between releases of the operating system like Ubuntu? Does it replace the BFU patching that people did with SXDE?
Comes with ipkg, similar to apt-get and cohorts, except it integrates ZFS snapshots for rolling back failed updates, and boot environments (a full system update will create a new boot environment based on a snapshot of your current one). Personally, I have like 10 boot environments right now, each representing the system state right before issuing pkg image-update (which updates your system with all the latest repo bits). Been lazy deleting them.
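The whole update dance is basically one command (run via pfexec or as root):

pfexec pkg image-update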

The repo is still kind of empty. Virtually all the standard stuff you got with SXCE, minus a few encumbered bits (mostly licensed drivers), is available on it. They're slowly adding stuff to it. I think sanctioned builds of WINE, Transmission, XChat, Songbird and god knows what else are queued for snv_99/100.

Well, as far as patching goes, the repo is currently a moving target that updates biweekly, SXCE style.

quote:

It seems very desktop-focused. Can you install it headless? My fileserver's BIOS supports serial console redirection, which I would prefer.
Disable GDM?

quote:

It seems like Murdock wants to use the GNU tools in Indiana. Are the Sun tools still available somewhere or are they completely gone?
"GNU tools" in Indiana is just /usr/gnu/bin heading the rest of the PATH variable. Remove it and you have all SVR4 tools again.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
A two-disk RAID-5 is effectively a mirror: A XOR nothing = A.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

sund posted:

Thanks. I wanted to end up with a four disk RAID 5 setup but wanted to start with the cheapest setup I could. I realize it looks like a crazy question because I always assumed parity was distributed across the disks, not on a dedicated drive.
Parity is indeed striped. But since with two disks the parity ends up being a copy of the data, it's effectively a mirror, minus the system being aware of it, so you don't get the read speedups. Unless the code actually handles that specific case differently.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
TLER is 7 seconds. Consumer-level error recovery generally has timeouts of around 2 minutes, which is a bitch, because it'll have any hardware or software RAID stack declare your drive with a single badly broken sector dead if it comes across it.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

vanjalolz posted:

.. but now I have like a hundred snapshots and they're starting to take up space. What's the best way to prune this stuff?
Scripting it. There's no other way.
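A crude sketch of what such a script boils down to, nuking the 20 oldest snapshots of a made-up filesystem:

zfs list -H -t snapshot -o name -s creation | grep '^tank/data@' | head -20 | xargs -n 1 pfexec zfs destroy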

Reading the LSARC case files, it appears that, at the latest with OpenSolaris 2008.11, there'll be a UI and an SMF service that take care of periodic snapshotting and automatic pruning.

Meanwhile, there's this, which will apparently be the basis for said stuff above:
http://blogs.sun.com/timf/en_IE/entry/zfs_automatic_snapshots_0_11

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

pipingfiend posted:

I would not run this stuff to download directly to the array as torrents will probably slowly kill it.
I'll reply based on how I think I understand that phrase.

It's a COW filesystem. Things that get random writes will end up fragmented like poo poo by default. The IO scheduler and prefetcher will compensate for that. Also, ZFS groups all writes into transactions, flushed every five seconds or whenever the cache fills up with writes.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The builds of Solaris Express and OpenSolaris are exactly the same, apart from a different installer and the latter not shipping third-party licensed bits (nothing you'd miss).

For the shiny things, you need OpenSolaris or SXCE. Nexenta is also tracking the latest builds with just a little lag AFAIK, but I've never used it.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

amerrykan posted:

When I issue 'zfs sharesmb=on mypool/storage', I receive "cannot share 'mypool/storage': smb add share failed".
Try prefixing your command with pfexec.
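I.e. the full form would be something like:

pfexec zfs set sharesmb=on mypool/storage

If that still fails, it's worth checking with svcs whether the SMB server service is actually online.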

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

wolrah posted:

I really want to use ZFS, but I have a somewhat irrational dislike of Solaris thanks to some old-rear end SPARC boxes I had to use in college.
Your dislike probably lies more with CDE than with Solaris. The latter comes with Gnome enabled by default now.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Triikan posted:

How migrateable are RAID-5s and spanned arrays? I have a hardware 4-Port RAID5 card, but it only allows for 2TB logical drives, so under XP Pro I spanned the two logical drives it created to form one 4TB, spanned array (this basically just writes to the first logical drive, then the second once the first is filled, correct? So it's not a software raid?).
NTFS sees the spanned volume as one drive and deals with it as such. In the most undesirable case, the MFT's already split across the end of the first and beginning of the second drive (IIRC NTFS places it in the middle to reduce seeking distances). A drive failing in a spanned volume just means drama.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Triikan posted:

Will this cause me any problems? It's still RAID5'ed.
Oh, missed that. No idea, depends on how the spanned volume code reacts to a degraded array.
