Regarding RAID-Z, it works differently precisely to fix some issues with RAID-5 (the write hole). Sun can't go around claiming to have a filesystem that stays consistent after crashes and then go use RAID-5 with a static stripe width loving up the data.

As far as RAID-1 read performance goes, given a good IO scheduler (which negates any need to "read in sync"), you can get near double read speeds. I run two WD RAID Editions (WD5000ABYS) in a RAID-1. I can get up to 70MB/s off a single drive; in mirror configuration, up to 130MB/s. The IO scheduler involved here is ZFS' IO pipeline. Both measurements are taken by dd'ing a huge file on the filesystem to /dev/null, using the filesystem's record size as block size. What should be kept in mind with these numbers is that ZFS is a COW system with load balancing and whatnot. Anyone with a defragmentation fetish would weep blood.

Shalrath posted:On a similar note, I believe the inode table (or whatever NTFS uses) has gone bad on my laptop's windows partition.

stephenm00 posted:why isn't zfs and raid-z a more common option? There must be some disadvantage for home users right?

The ARC cache resizes with memory pressure. At least it does in Solaris; I'm not sure if that works already in FreeBSD or if it's still a fixed setting (I think it was 64MB). Anyway, you can manually set a limit, which would be stupid, but people get too impressed with code making gratuitous use of free unused memory (see the Vista Superfetch bullshitting).

Idiotic anecdotal reference: When I was new to Solaris and running ZFS, watching a movie off the harddisk in the background, I was wondering why the drive LED wasn't going at all and why I was having occasional sound skipping (lovely driver caving under load, it's fixed now). At some point while diagnosing, I checked the IO stats in ZFS, and it turned out it had figured out I was doing linear reads and was actually reading 180-200MB at once every 15 minutes.

Combat Pretzel fucked around with this message at 16:38 on Mar 19, 2008
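For the curious, the measurement was nothing fancier than this (file name made up; 128K is the default ZFS record size):

dd if=/tank/bigfile of=/dev/null bs=128k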
# ¿ Mar 19, 2008 16:24 |
Munkeymon posted:More recently, I installed some RAM that's either bad or that the motherboard hates, and Solaris crashed and corrupted a file* that prevents it from booting at all, so I figured I'd try BSD and see if it worked. Unfortunately, it's refusing to recognise zpool (which I admit I was not able to properly export). Back to Solaris now, hope the newest version of the developer edition is more stable. Also I hope it will import the array because if not I'm really gonna

As far as Solaris not booting goes: if GRUB and failsafe mode still work, the boot archive is hosed. Ain't a biggie, since you can recreate (i.e. update) it from failsafe mode.

Munkeymon posted:You should know that if you plan on running ZFS and using Samba, you might have problems. My server, running Solaris, had stability issues for months and then I stopped listening to music stored on the network.

Combat Pretzel fucked around with this message at 01:10 on Mar 22, 2008
# ¿ Mar 22, 2008 01:04 |
Munkeymon posted:That doesn't make sense based on what I found on their wiki: http://wiki.freebsd.org/ZFSQuickStartGuide they support features from at least version 8, though I can see that the features don't exactly stack, and so could be skipped for lower-hanging fruit. Besides, the -f flag doesn't do anything when the system swears there are no pools or if it simply can't start ZFS in the first place.

quote:I thought I read somewhere that export wrote some extra metadata, but I could easily be wrong since all my research is a year old at this point.

quote:I'd rather have the newer system going and the only things I care about are the pool and the Azureus install, which is only valuable because it required a retarded amount of effort to get working.

bootadm update-archive -R /a

(Since in failsafe mode, it mounts your root fs to /a.)

quote:I'd much rather get an AMD64 build running, but that apparently means conjuring nforce drivers out of thin air, which I'm not up for. Maybe I will just get a minimal effort system running and ride it out until the next version if it's that close :\ I miss the warm, fuzzy feeling of having nightly automated backups.

No conjuring needed, the nforce SATA driver is right there:

servo@bigmclargehuge:~ > modinfo | grep nv_sata
 38 fffffffff7842000   5b88 189   1  nv_sata (Nvidia ck804/mcp55 HBA v1.1)
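By the way, the usual dance for grabbing a pool that wasn't cleanly exported goes something like this (pool name made up):

zpool import          # scans attached devices and lists the pools it can see
zpool import -f tank  # force the import if it complains the pool is in use elsewhere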
# ¿ Mar 22, 2008 13:35 |
Munkeymon posted:Oh sorry, I read here: http://wiki.freebsd.org/ZFS that they have delegated admin going, which is a version 8 feature. The pool it failed to notice was at version 3, so I don't think it was a version problem.

quote:Yeah, I installed FreeBSD on a spare drive that I swapped in and ZFS didn't work. I got an error message about it being unable to initialize the ZFS system and I couldn't find anything helpful on Google, so I installed the latest Solaris Express Developer over it.

quote:I did try updating my old install (~1 year old now), but the installer said there wasn't enough space on the drive. I don't see why, because that drive has 62GB free on slice 7, though I may be misunderstanding the update procedure.

If you still want to run Solaris and get rid of these silly hassles, wait a month for the snv_87 Community Edition, following the next upcoming Developer Edition. The new graphical installer (which you access under the Developer Edition boot option) will support ZFS root and boot. This way, you don't have to deal with mis-sized slices anymore on upgrades. Snap Upgrade will also be integrated, which is like Live Upgrade, but built for ZFS and taking advantage of it. (ZFS boot currently works only on single disk or single mirror pools, so you need a separate pool on your system disk.)

I'm also waiting for that build. If you intend to use GUI stuff on your server (locally or via XDMCP), note that snv_88 will have Gnome 2.22 integrated, and you don't want that, because it appears the new GVFS stuff makes it crash happy.

quote:Did you have to do any special configuration for that or did everything work right from the get-go?

quote:On a side note, I can't believe you use > in your prompt. I'd constantly be checking to be sure I wasn't redirecting output into an executable file

Actually, I could remove the drat user name, because there's just me and root.

Combat Pretzel fucked around with this message at 00:16 on Mar 23, 2008
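If you ever need to check what version a pool is at before moving it between systems, zpool will tell you (pool name made up):

zpool upgrade -v        # lists the pool versions this system supports
zpool get version tank  # shows the version of the pool itself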
# ¿ Mar 23, 2008 00:14 |
Munkeymon posted:I had assumed it would just move or copy everything on slice 0 to someplace on 7 and then merge new files in, but I guess the installer isn't that smart.

quote:I don't really want Gnome at all because I prefer KDE. I do use the GUI, though, for the torrent client.

quote:Also, I don't think putting the system root in the pool is really something I care to do.

Munkeymon posted:Will the community edition ever come out in 64-bit, do you think? You seem way more knowledgeable about the Solaris community than I am. Oh, and what about the blurb on the download page that says it's 'unsupported'? Is that Sun speak for 'you're pretty much on your own'?

Right now, it's the same argument as with Windows. There's no real point in a 64-bit userland, except in places where the last bit of performance matters, i.e. the kernel and Xorg. And it's unsupported in the sense that they won't be liable if you run NASDAQ on it and then, whoopsy-daisy, your data goes missing.

quote:Well on my first try with a 64-bit build the system didn't seem to be able to see them at all, IDE or no.
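For what it's worth, the kernel already runs 64-bit automatically on capable hardware; you can check what you got with:

isainfo -kv   # reports whether the running kernel is 32-bit or 64-bit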
# ¿ Mar 23, 2008 03:39 |
TheChipmunk posted:Is OpenSolaris a respectable option for ZFS and homebrew NAS boxes? (By NAS I mean old computer

--edit: What I'm saying is that if you want the best stability for ZFS, you should install it on the turf it's born on. To check if all your hardware's supported, download the Project Indiana preview from opensolaris.org. It's an installable LiveCD.

There's an OpenSolaris distro called NexentaStor specialized for NAS crap; it comes with a Web UI.

Combat Pretzel fucked around with this message at 14:51 on Mar 23, 2008
# ¿ Mar 23, 2008 14:38 |
Works just fine as a desktop OS here. On the surface, it's just the same as Linux. GNOME, Compiz, Firefox, Thunderbird, etc. blahblah. The memory footprint isn't too different either. You just don't have nice and cosy package management a la Ubuntu. That's coming with Project Indiana; give it at least another 6 months.
# ¿ Mar 23, 2008 22:00 |
stephenm00 posted:To implement ZFS on a nas, wouldn't that require a lot of ram and cpu?

Not really. CPU isn't an issue, and RAM isn't either, though the more RAM, the more disk cache and subsequently the more performance you get out of it. The ZFS cache does more than simple LRU caching: ZFS actively prefetches large chunks of data if it sees reason to (i.e. streaming applications or databases).

quote:also could someone explain "you can't currently expand a VDEV's"
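If you're curious what the cache is up to, the ARC exposes kstats you can poke at on Solaris:

kstat -p zfs:0:arcstats:size   # current ARC size in bytes
kstat -p zfs:0:arcstats:c      # the target size the ARC is aiming for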
# ¿ Mar 28, 2008 20:07 |
Single disk expansions aren't possible; you'll have to add another array to the pool to increase it. As for expansion getting implemented, it isn't as easy as with RAID-5, where you mostly have to move two stripes per row. Each ZFS filesystem block is spanned across all devices, and because of that, stripe sizes are variable. Adding another drive, ZFS would have to comb through the whole data set and restripe ALL data. Quite dangerous. Last time this was brought up on the mailing list, there was stuff being added that would enable such a scenario. Whether it'll be implemented remains to be seen.
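Growing a pool by a whole vdev looks like this (pool and device names made up):

zpool add tank raidz c1t4d0 c1t5d0 c1t6d0   # adds a second RAID-Z vdev; the pool grows by its capacity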
# ¿ Mar 29, 2008 19:32 |
If you want ZFS and stability, use Solaris. It's no more unwieldy than FreeBSD. To check whether your hardware is supported or not, download the OpenSolaris Developer Preview, which is a live CD.
# ¿ Apr 6, 2008 20:00 |
ZFS takes anything that has character-device-like semantics, including files.
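Which means you can build a throwaway pool out of plain files to experiment with (paths made up):

mkfile 128m /var/tmp/d0 /var/tmp/d1
zpool create playground mirror /var/tmp/d0 /var/tmp/d1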
# ¿ May 13, 2008 16:15 |
poo poo Copter posted:Thanks. Upgrading in the future is one concern of mine though. Would I absolutely need another 5 disks in order to upgrade/expand, or could I add say 3 to another pool? How exactly does the expansion process go? Would I need to have two separate arrays, or could I add the other disks to the same array - but on a separate pool?

poo poo Copter posted:Also does anyone know how Solaris is with VMWare server? I'm interested in virtualizing a Windows dev environment on this file server as well. I know it's brain-dead easy to set up in CentOS or Debian, but I haven't been able to find any specific info for Solaris.

Combat Pretzel fucked around with this message at 19:50 on May 13, 2008
# ¿ May 13, 2008 19:48 |
poo poo Copter posted:So am I correct in assuming that if I have multiple vdevs in the same pool, that they could be accessible as a single large volume?

poo poo Copter posted:Hmm, it sounds like I will have to give VirtualBox a shot.

If you've time to fiddle, you should consider using a ZFS ZVOL, though. It's more efficient than a regular file on ZFS as a VM container. Goes something like this, creating a 10GB thin provisioned virtual disk in the ZFS pool (as superuser, or using pfexec if your account has the root role):

zfs create -s -V 10G pool/windows
ls -l /dev/zvol/rdsk/pool/windows               (results probably in something like the path used in the next line)
chown youraccount /devices/pseudo/zfs0@1:a,raw  (what the ls -l gave you)

You need to change the ownership on the symlink target. Adjust permissions as necessary. Then as regular user:

VBoxManage internalcommands createrawvmdk -filename foo.vmdk -rawdisk /dev/zvol/rdsk/pool/windows

The ZVOL should then show up as an available disk in the GUI.

Combat Pretzel fucked around with this message at 20:13 on May 13, 2008
# ¿ May 13, 2008 20:11 |
napking posted:i installed 2008.05 on top of the b81 and it's been great so far the past few days. zfs root is awesome!
# ¿ May 27, 2008 11:17 |
napking posted:do you know if the current version of zfs lets me change a mirrored two disk pool into a raidz pool by adding a third disk? i'd really like to expand this pool without going through hoops.

Not yet. Adding the ability to expand existing RAID-Z arrays is slowly being considered; at least the logistics of it are already being discussed.
# ¿ May 28, 2008 22:37 |
I remember there being a command to do this (I've read about installing CIFS on the OpenSolaris forums and running it), but I'm failing at digging it up again.
# ¿ May 31, 2008 11:53 |
ZFS is pooled storage, so that's normal. You should know, though, that there's currently no vdev remove functionality yet; it's still in the works. The way ZFS works in regards to geometry and striping makes it a little more complex than other pooled storage filesystems. So if you want to switch out a vdev, you have to put the new one in, replace the old one with the new one, and only then can you remove the old one. The new vdev has to be the same size as or larger than the old one.

Delta-Wye posted:Could I just swap the 250s out one by one and let it rebuild? Would I expect to see 640G or 960G?
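The switcheroo itself is a one-liner (pool and device names made up):

zpool replace tank c1t2d0 c1t3d0   # old device, new device; once resilvering finishes, the old one detaches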
# ¿ Jun 29, 2008 11:23 |
Nam Taf posted:So I am right in assuming that if I have, say, 4x750GB drives, I get 3x750GB space. I then replace them 1 by 1 with 1TB drives, letting the array rebuild itself each time. After I replace the 4th, I should automagically see 3TB of space appear, yes?

Delta-Wye posted:I say same amount of sectors.

If you were to replace a 250GB drive with another one that is, however, a bunch of sectors smaller due to manufacturer geometry differences, ZFS will tell you to go screw yourself. You have to manually replace the drive. If you remove it before inserting the new drive, it'll report the drive missing. The command is zpool replace pool olddev newdev.

PS: you can't boot from RAID-Z (yet). You'll be needing a small boot drive.

Delta-Wye posted:Now if only I knew the mobo I want to use supported freebsd - something tells me it probably doesn't.

Also, try going with Solaris first, before trying FreeBSD. If you're serious about using a ZFS fileserver and value stability, you should go with the native environment. It isn't a big scary beast. The OpenSolaris image, which coincidentally has a very nice and friendly installer on its LiveCD, will be updated shortly with new bits. It's a 700MB live CD that also has a device detection tool, so you can see if everything's supported before installing.

I'm running Solaris on an X48 mainboard, with an NVidia card, a bunch of drives, a Xonar DX and a PCIe Intel NIC, and everything's well supported and hardware accelerated where available.

Combat Pretzel fucked around with this message at 15:44 on Jul 2, 2008
# ¿ Jul 2, 2008 15:34 |
Delta-Wye posted:Thanks for the info. I've used Linux extensively, and OpenBSD and FreeBSD less so (although I like them a bit better), but the few minutes I sat in front of Solaris made me want to kill myself. Perhaps I ought to suck it up and try again!

Try the OpenSolaris 2008.05 release. If that's still bothering you, then I don't know.
# ¿ Jul 3, 2008 14:23 |
Delta-Wye posted:For what it's worth, the last time I ran Solaris was... 2001? 2002? Long before OpenSolaris; I'm pretty sure it only ran on Sparc systems then. And it wasn't installing it, it was just using it that I found distasteful. It just bugged me for whatever reason.

OpenSolaris comes as a live CD with an installer similar to Ubuntu's. If that and Gnome would still bug you, I don't know. And if you want a stable and fast ZFS server, you'd better go with Solaris. It's the native runtime environment. Especially because they've also got a kernel CIFS implementation, based on actual Microsoft documentation and integrated with ZFS, that isn't available on FreeBSD. All in all quite a bit faster than this Samba poo poo, and worth it if the clients are Windows.
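With the ZFS integration, sharing is just a dataset property (dataset name made up, and this assumes the SMB server service is enabled):

zfs set sharesmb=on tank/media   # serves the dataset via the in-kernel CIFS server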
# ¿ Jul 4, 2008 14:32 |
vanjalolz posted:solaris is weird, i'm trying it in a vm

If you're by chance using Solaris 10 (and maybe using CDE), ditch it and go with the Nevada builds. They're pretty stable. Even better, go with OpenSolaris (a Nevada distro), which supports installation with ZFS boot (please get build 93, to avoid the boot breakage introduced between builds 86 and 89 when updating the whole system). Right now, if you're serious about longterm ZFS usage, Solaris is the way to go, at least until that Pawel guy removes the experimental status of the FreeBSD port. He has reasons why it's still marked as such.

vanjalolz posted:ZFS boot

First of all, ZFS boot is still WIP. The only thing really supported is booting from a single disk pool. A single mirror works, too. RAID-Z and multi-vdev don't work yet, since ZFS doesn't yet know how to keep a complete boot archive on every drive in the pool and make it discoverable (remember, RAID-Z and multi-vdev are both striped in various ways). ZFS boot phase 1 was only committed with build 88, which is like 2.5 months ago. Before that, it was more of a hack on top of experimental support.

Second, if you intend to create a file server, you should maintain your system on a separate drive anyway, so do exactly what the troubleshooting says, though I haven't found it necessary yet.

Lastly, don't be scared by these so-called memory requirements of ZFS thrown around. Those are recommendations for enterprise use, except that every rear end in a top hat keeps quoting them even to home users.

Going with OpenSolaris gets you Boot Environments, which use ZFS snapshots and clones to play a kind of Time Machine and Pit Stop for the operating system. That means you can create clones of your current system at will, boot into each one as you please, update each one separately and from within other BEs, etc. blah blah.
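A rough sketch of what that looks like in practice (BE name made up):

beadm create sandbox     # clone the running boot environment
beadm list               # show all BEs and which one is active
beadm activate sandbox   # boot into the clone on the next reboot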
# ¿ Jul 17, 2008 19:16 |
Utility placement in Solaris is actually the true Unix way, and I guess it's just a matter of getting used to it. BSD is also different from Linux on the command line. If you install the OpenSolaris distro (from opensolaris.com, not .org), however, you get the GNU tools as the default.

Anyway, you're currently using the Solaris Express Community Edition. That's fine, you just don't get the OpenSolaris tools like IPS and Boot Environments. The latter allows for way easier upgrading and experimenting. OpenSolaris is also a live CD, so you don't need to install it for basic tests. OpenSolaris does ZFS root only, no UFS. And it has an Ubuntu style installer, not the silly CDE one.

As said, the memory requirements depend on what you want to do. For a home file server, even a gigabyte is enough. ZFS makes gratuitous use of memory for functions like caching and especially prefetching (it detects linear and stride reads and, depending on the pattern, prefetches very large chunks, think 100-200MB). But it can also run in low memory situations. The cache reacts to memory pressure and shrinks if necessary.
# ¿ Jul 18, 2008 13:49 |
I'm trying to get unRAID. You say polarity (parity?) disk, so I suppose it's simply a RAID-4 with a little awareness of the disk geometries?
# ¿ Jul 27, 2008 20:57 |
ZFS writes GUIDs onto the drives. If you pull one, the array will drop to degraded state and, I think, still allow writes. If you put the pulled drive back, ZFS will recognize it via the GUID and update it (the versioned metadata tree approach speeds that up, since it can figure out what changed while the drive was offline). If you add a new drive instead, you need to manually replace the missing drive with it, using the zpool command. That resilvering will take a while, since the new drive starts out clean.
# ¿ Jul 30, 2008 12:03 |
Farmer Crack-rear end posted:Why are two small power supplies better than one larger one?
# ¿ Aug 4, 2008 11:31 |
RAID-Z is equivalent to RAID-5, RAID-Z2 to RAID-6. The number just indicates how many parity stripes there are per row. (Actually, no one says RAID-Z1.)

As far as the onboard graphics go, what chip? The Xorg it comes with ships drivers for all sorts of Intel chips. It also comes with the open source ATI driver, but I'm not sure if it's on the initial LiveCD or in one of the updates. If you choose to pkg image-update to get the driver or just to get the newest bits, be sure to head over to opensolaris.org and go to the Indiana forum, because there's some manual work required (scroll down to IMPORTANT in the OP) due to ZFS boot changes.

Each pkg image-update creates a new boot environment; if the update doesn't please you, you can boot back into the pre-update boot environment and pretend you never updated.
# ¿ Aug 6, 2008 10:52 |
It supports C- and P-states on Intel, the latter being frequency scaling. Activating it seems a mystery though. PowerTOP once suggested it, I enabled it and it worked, but only for that session. Not sure how to do it manually, and PowerTOP never suggested it again. Then again, my Core 2 Quad only had C0 (running) and C1 (simple halt), as well as only two P-states, i.e. 2.67GHz (full speed) and 2.0GHz. Whether that's coming from the CPU or Solaris' power management support, I don't know. But I think it'd be safer to go Intel, since they have Intel developers contributing code for power management and scheduler stuff.
# ¿ Aug 18, 2008 16:21 |
Hanging around on the OpenSolaris mailing lists a lot. --ninja edit: Where a lot of their developers also post.
# ¿ Aug 18, 2008 17:53 |
Insane2986 posted:Will RAID 5/6 "wake up" my drives if they are inactive? (I have Vista set to turn drives off after 10 minutes of inactivity to cut down on heat)

If you're hoping that the parity drives would spin down when there's no write activity, you're out of luck. Parity is spread across all the drives; the parity stripe resides on a different drive in each row. For that matter, access time updates happen on reads too, creating writes.
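On ZFS you can at least stop reads from generating atime writes (dataset name made up):

zfs set atime=off tank/storage   # reads no longer dirty the access time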
# ¿ Aug 22, 2008 22:58 |
vlack posted:Is there a network-aware package manager yet? Does it have third-party packages? Does it replace smpatch and distribute patches to Sun-supported software? Can I use it to upgrade between releases of the operating system like Ubuntu? Does it replace the BFU patching that people did with SXDE?

The repo is still kind of empty. Virtually all the standard stuff you got with SXCE, minus a few encumbered bits (mostly licensed drivers), is available on it. They're slowly adding stuff to it; I think sanctioned builds of WINE, Transmission, XChat, Songbird and god knows what else are queued for snv_99/100. As far as patching goes, the repo currently updates biweekly, SXCE style.

quote:It seems very desktop-focused. Can you install it headless? My fileserver's BIOS supports serial console redirection, which I would prefer.

quote:It seems like Murdock wants to use the GNU tools in Indiana. Are the Sun tools still available somewhere or are they completely gone?

Combat Pretzel fucked around with this message at 19:32 on Sep 11, 2008
# ¿ Sep 11, 2008 19:29 |
A two disk RAID5 is effectively a mirror: A XOR nothing = A, so the parity block is just a copy of the data block.
# ¿ Sep 14, 2008 22:24 |
sund posted:Thanks. I wanted to end up with a four disk RAID 5 setup but wanted to start with the cheapest setup I could. I realize it looks like a crazy question, because I always assumed parity was distributed across the disks, not on a dedicated drive.
# ¿ Sep 15, 2008 23:13 |
TLER is 7 seconds. Consumer-level error recovery generally has timeouts of around 2 minutes. Which is a bitch, because that will have any hardware or software RAID stack declare a drive with a single badly broken sector dead if it comes across that sector.
# ¿ Sep 19, 2008 18:24 |
vanjalolz posted:.. but now I have like a hundred snapshots and they're starting to take up space. What's the best way to prune this stuff?

Reading the LSARC case files, it appears that, at the latest with OpenSolaris 2008.11, there'll be a UI and an SMF service that take care of periodic snapshotting and automatic pruning. Meanwhile, there's this, which will apparently be the basis for said stuff: http://blogs.sun.com/timf/en_IE/entry/zfs_automatic_snapshots_0_11
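Pruning by hand isn't hard either, just tedious (dataset and snapshot names made up):

zfs list -t snapshot               # see what has accumulated and how much space it holds
zfs destroy tank/home@2008-09-01   # drop a single snapshot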
# ¿ Oct 1, 2008 12:55 |
pipingfiend posted:I would not run this stuff to download directly to the array as torrents will probably slowly kill it.

It's a COW filesystem; things that get random writes will end up fragmented like poo poo by default. The IO scheduler and prefetcher will compensate for that. Also, ZFS groups all writes into transactions, committed every five seconds or whenever the cache fills up completely with writes.
# ¿ Nov 21, 2008 00:24 |
The builds of Solaris Express and OpenSolaris are exactly the same, apart from a different installer and the latter not shipping with third-party licensed bits (nothing you'd miss). For the shiny things, you need OpenSolaris or SXCE. Nexenta is also tracking the latest builds with just a little lag, AFAIK, but I've never used it.
# ¿ Nov 26, 2008 16:22 |
amerrykan posted:When I issue 'zfs sharesmb=on mypool/storage', I receive "cannot share 'mypool/storage': smb add share failed".
# ¿ Dec 13, 2008 00:00 |
wolrah posted:I really want to use ZFS, but I have a somewhat irrational dislike of Solaris thanks to some old-rear end SPARC boxes I had to use in college.
# ¿ Jan 1, 2009 23:33 |
Triikan posted:How migrateable are RAID-5s and spanned arrays? I have a hardware 4-Port RAID5 card, but it only allows for 2TB logical drives, so under XP Pro I spanned the two logical drives it created to form one 4TB, spanned array (this basically just writes to the first logical drive, then the second once the first is filled, correct? So it's not a software raid?).
# ¿ Feb 1, 2009 12:33 |
Triikan posted:Will this cause me any problems? It's still RAID5'ed.
# ¿ Feb 1, 2009 19:47 |