- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
Could anyone point out any good tutorials for setting up md or RAID-Z arrays? Some pros and cons would be nice as well. I just snagged five 750s, a Stacker case and one of those 5.25" backplanes, so I'm good on hardware. But I'm a network guy, so my storage knowledge is limited to basics and theory.
Also, what is the general consensus on these backplanes?
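For the md-vs-RAID-Z comparison, the basic create commands look roughly like this. This is only a sketch with hypothetical device names, not a tested recipe: md (Linux software RAID) gives you a block device you still have to put a filesystem on, while RAID-Z is part of ZFS and hands you pooled, mounted storage in one step.

```shell
# Linux md: a 5-disk RAID 5 block device, plus a filesystem on top
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext3 /dev/md0

# ZFS RAID-Z on Solaris: one command creates the pool and mounts it
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
```

Broadly, md is the mature Linux option; RAID-Z adds checksumming and avoids the RAID 5 write hole, but expands in whole-vdev increments.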
|
#
May 13, 2008 05:36
|
|
- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
H110Hawk posted:zpool stuff
Thanks. Upgrading in the future is one concern of mine, though. Would I absolutely need another 5 disks in order to upgrade/expand, or could I add, say, 3 as another pool? How exactly does the expansion process go? Would I need to have two separate arrays, or could I add the other disks to the same array, but in a separate pool?
Also, does anyone know how Solaris is with VMware Server? I'm interested in virtualizing a Windows dev environment on this file server as well. I know it's brain-dead easy to set up on CentOS or Debian, but I haven't been able to find any specific info for Solaris.
|
#
May 13, 2008 19:38
|
|
- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
Toiletbrush posted:
To expand a pool, you throw additional vdevs into it. How the vdevs are made up isn't important: they can be files, single disks, RAID-Z arrays or mirrors (the latter two each count as a single vdev). Two RAID-Zs in a pool don't need to match in size or number of disks, either, nor do the vdev types need to match; you can mix mirrors with RAID-Zs in one pool. If there are multiple vdevs (e.g. two RAID-Z arrays), ZFS spreads writes across them, influenced by metrics like available write bandwidth and free space.
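The mechanism described in that quote can be sketched as commands; the pool name and Solaris device names here are hypothetical:

```shell
# Pool starts as one 5-disk RAID-Z vdev
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0

# Expand it later by adding a second, smaller RAID-Z vdev;
# the new space appears in the same pool, still a single volume
zpool add tank raidz c1t0d0 c1t1d0 c1t2d0

# zfs list shows the combined capacity
zfs list tank
```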
Thanks for the reply.
So am I correct in assuming that if I have multiple vdevs in the same pool, they would be accessible as a single large volume?
quote:To use Solaris as a virtualization host, your options are either a Nevada build (Solaris Express, any recent edition, or OpenSolaris 2008.05) as Dom0 on Xen, or VirtualBox 1.6. I figure VirtualBox would be the better option for you. It also comes with guest drivers for Windows, speeding things up quite a bit, plus a seamless mode to merge the Windows desktop into your Solaris desktop. There's no VMware for Solaris (yet?)
Hmm, it sounds like I will have to give VirtualBox a shot.
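If it helps anyone later, here's a rough sketch of setting up a Windows guest from the command line. The VM name is made up, and the double-dash option syntax is from later VirtualBox releases (the 1.6-era CLI used single-dash options), so check `VBoxManage --help` against whatever version you install.

```shell
# Create and register a VM for a Windows guest (name is hypothetical)
VBoxManage createvm --name windev --register
VBoxManage modifyvm windev --memory 1024 --ostype WindowsXP

# Boot it without a console window on the server
VBoxManage startvm windev --type headless
```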
|
#
May 13, 2008 19:53
|
|
- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
Help! I'm having problems creating my LVM logical volume. Any ideas? I'm not sure what other info would help.
code:root@spice:/home/justin# lvcreate -l 44712 raid -n lvm0
File descriptor 3 left open
File descriptor 4 left open
File descriptor 5 left open
File descriptor 7 left open
/proc/misc: No entry for device-mapper found
Is device-mapper driver missing from kernel?
Failure to communicate with kernel device-mapper driver.
/proc/misc: No entry for device-mapper found
Is device-mapper driver missing from kernel?
Failure to communicate with kernel device-mapper driver.
Incompatible libdevmapper 1.02.20 (2007-06-15)(compat) and kernel driver
striped: Required device-mapper target(s) not detected in your kernel
lvcreate: Create a logical volume
lvcreate
[-A|--autobackup {y|n}]
[--addtag Tag]
[--alloc AllocationPolicy]
[-C|--contiguous {y|n}]
[-d|--debug]
[-h|-?|--help]
[-i|--stripes Stripes [-I|--stripesize StripeSize]]
{-l|--extents LogicalExtentsNumber |
-L|--size LogicalVolumeSize[kKmMgGtTpPeE]}
[-M|--persistent {y|n}] [--major major] [--minor minor]
[-m|--mirrors Mirrors [--nosync] [--corelog]]
[-n|--name LogicalVolumeName]
[-p|--permission {r|rw}]
[-r|--readahead ReadAheadSectors]
[-R|--regionsize MirrorLogRegionSize]
[-t|--test]
[--type VolumeType]
[-v|--verbose]
[-Z|--zero {y|n}]
[--version]
VolumeGroupName [PhysicalVolumePath...]
lvcreate -s|--snapshot
[-c|--chunksize]
[-A|--autobackup {y|n}]
[--addtag Tag]
[--alloc AllocationPolicy]
[-C|--contiguous {y|n}]
[-d|--debug]
[-h|-?|--help]
[-i|--stripes Stripes [-I|--stripesize StripeSize]]
{-l|--extents LogicalExtentsNumber[%{VG|LV|FREE}] |
-L|--size LogicalVolumeSize[kKmMgGtTpPeE]}
[-M|--persistent {y|n}] [--major major] [--minor minor]
[-n|--name LogicalVolumeName]
[-p|--permission {r|rw}]
[-r|--readahead ReadAheadSectors]
[-t|--test]
[-v|--verbose]
[--version]
OriginalLogicalVolume[Path] [PhysicalVolumePath...]
root@spice:/home/justin#
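The repeated "/proc/misc: No entry for device-mapper found" errors usually mean the kernel's device-mapper driver isn't loaded, or is too old for the installed LVM userspace. Assuming a stock Ubuntu kernel with device-mapper built as a module, a quick check looks like:

```shell
# Is the device-mapper driver registered with the kernel?
grep device-mapper /proc/misc || echo "device-mapper not registered"

# Load the module, then confirm kernel and library versions agree
modprobe dm_mod
dmsetup version
```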
|
#
May 27, 2008 02:35
|
|
- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
Edit: I figured it out, thanks.
Edit2: Still getting this:
code:root@spice:/sbin# lvcreate -l 44712 raid -n lvm0
File descriptor 3 left open
File descriptor 4 left open
File descriptor 5 left open
File descriptor 6 left open
Logical volume "lvm0" created
root@spice:/sbin# lvdisplay /dev/raid/lvm0
File descriptor 3 left open
File descriptor 4 left open
File descriptor 5 left open
File descriptor 6 left open
--- Logical volume ---
LV Name /dev/raid/lvm0
VG Name raid
LV UUID 3DCAxo-202D-EMBN-4NRm-m7ez-SOa9-GZSrbq
LV Write Access read/write
LV Status available
# open 0
LV Size 2.73 TB
Current LE 44712
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
Shit Copter fucked around with this message at 04:02 on May 27, 2008
|
#
May 27, 2008 03:45
|
|
- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
Alowishus posted:
I assume you're referring to the "File descriptor X left open" errors... those don't look right... but can you mkfs on the new LV? What version of Linux are you working with here?
Well, that and the "# open 0" line. I'm on Ubuntu 8.04 Server. I haven't tried mkfs yet because of those errors; I'll try it in the morning.
Shit Copter fucked around with this message at 05:29 on May 27, 2008
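For reference, if the LV itself is healthy, the next steps would be a filesystem and a mount. The paths are the ones from the posts above; the mount point is arbitrary:

```shell
# Put a filesystem on the new LV and mount it (ext3 was the
# usual choice on Ubuntu 8.04)
mkfs.ext3 /dev/raid/lvm0
mkdir -p /mnt/raid
mount /dev/raid/lvm0 /mnt/raid
df -h /mnt/raid
```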
|
#
May 27, 2008 05:25
|
|
- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
Edit: figured it out.
Shit Copter fucked around with this message at 05:45 on May 30, 2008
|
#
May 30, 2008 03:26
|
|
- Shit Copter
- Oct 13, 2004
-
what a P.O.S.
|
I had installed those packages, but hadn't tried rebooting...
Thanks
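For anyone hitting the same thing: after installing the LVM/device-mapper packages and rebooting, a quick sanity check that everything came back up (volume names are from the earlier posts):

```shell
vgs raid                                      # volume group is visible
lvs raid                                      # lvm0 is listed at its full size
lvdisplay /dev/raid/lvm0 | grep "LV Status"   # should say "available"
```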
|
#
May 30, 2008 05:05
|
|