|
The_Frag_Man posted:Does anyone know if the Asus Crosshair IV would work with OpenSolaris?

FISHMANPET posted:Now if they would just change the ZFS license to GPL it could be in the Linux Kernel and I'd be happy as a clam.
|
# ? Jul 31, 2010 18:20 |
|
|
necrobobsledder posted:Given Oracle owns the rights to ZFS now basically, not a snowflake's chance in hell.
|
# ? Jul 31, 2010 21:54 |
|
I've been looking at the Synology DS110j. I don't know much about NAS and have been reading up. I have read that the performance is only so-so (not great), but what does this mean in terms of file read and write speeds over wireless? I plan on putting a 2TB drive in it and having it sit in the other room, hooked up to the modem. My computer will be in the next room. I wanted to know what the expected file transfer performance is over wireless, and wired to the wireless router. Another possible setup would be having my iMac connected via cat5 to the wireless router along with the NAS. Would file transfer between the two be much faster? If so, what can I expect? 75MB/sec? Also, how is the DS110j in terms of hosting an FTP server and running torrents? How many torrents is it able to run? Thanks for the help.
|
# ? Aug 1, 2010 04:46 |
|
Combat Pretzel posted:You have to change your repository manually to update to developer builds from a stable one. If you managed to update to a dev build, you didn't do it by accident. If you're at B134 you can switch over to the standard repo. Once you go past 134 you can no longer switch until the next stable repo comes out. They hold back the dev repo until the stable release comes out, so people who want to switch to stable don't accidentally get pushed ahead.
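For reference, switching publishers on an OpenSolaris 2009.06-era system looks roughly like this; the repo URLs are the standard dev/release ones of that era, so verify them against your build before running anything:

```shell
# Point the preferred publisher at the stable release repo
# (use http://pkg.opensolaris.org/dev/ instead for developer builds),
# then pull the image up to date. Needs root privileges or pfexec.
pfexec pkg set-publisher -O http://pkg.opensolaris.org/release/ opensolaris.org
pfexec pkg image-update
```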
|
# ? Aug 1, 2010 05:02 |
|
Is it possible to run OpenSolaris in a VM? Is it possible to assign raw disks directly to the VM?
|
# ? Aug 1, 2010 05:36 |
|
The_Frag_Man posted:Is it possible to run OpenSolaris in a VM? Is it possible to assign raw disks directly to the VM? Depends on the VM. OpenSolaris will work for sure in VirtualBox, but I'm not sure if that has passthrough ability.
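VirtualBox can at least get at raw disks through a raw-VMDK wrapper, though it's not passthrough in the VT-d sense. A sketch with placeholder device and VM names (VirtualBox 3.x-era syntax; the VirtualBox process needs read/write access to the raw device):

```shell
# Wrap the whole physical disk in a VMDK descriptor...
VBoxManage internalcommands createrawvmdk \
    -filename ~/rawdisk1.vmdk -rawdisk /dev/sda
# ...then attach it to the VM like any other disk image.
VBoxManage storageattach "osol-vm" --storagectl "SATA Controller" \
    --port 1 --device 0 --type hdd --medium ~/rawdisk1.vmdk
```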
|
# ? Aug 1, 2010 06:38 |
|
You can do it with ESXi and a motherboard that supports VT-d; you can read about it here: http://www.napp-it.org/napp-it/all-in-one/index_en.html . I want to build a storage server with this method so bad, it has so many advantages.
|
# ? Aug 1, 2010 20:03 |
|
Perplx posted:You can do it with ESXi and a motherboard that supports VT-D, you can read about it here http://www.napp-it.org/napp-it/all-in-one/index_en.html . I want to build a storage server with this method so bad, it has so many advantages. Don't really need VT-d for raw device mapping: http://vm-help.com/esx40i/SATA_RDMs.php I've got a Gentoo VM publishing NFS this way and it's great. NeuralSpark fucked around with this message at 20:34 on Aug 1, 2010 |
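The RDM trick from that link boils down to one vmkfstools call per disk. The vml identifier and datastore path below are placeholders; list the real ones with `ls /vmfs/devices/disks/`:

```shell
# -z = physical-compatibility RDM (use -r for virtual compatibility instead);
# the output .vmdk is a pointer file you attach to the VM as a normal disk.
vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX \
    /vmfs/volumes/datastore1/rdms/disk1-rdm.vmdk
```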
# ? Aug 1, 2010 20:31 |
|
Initial NexentaStor impressions: CIFS performance is much better than FreeNAS. Interface is not nearly as intuitive, and lacks adequate protection against doing bad things. I accidentally wiped my data trying to reset my CIFS setup. Does anyone know of any ZFS file recovery tools?
|
# ? Aug 2, 2010 00:07 |
|
The_Frag_Man posted:Is it possible to run OpenSolaris in a VM? Is it possible to assign raw disks directly to the VM? I've got OpenSolaris running on a VM with 2x1TB disks running off it. So far it works great.
|
# ? Aug 2, 2010 01:47 |
|
Zhentar posted:Initial NexentaStor impressions: CIFS performance is much better than FreeNAS. Interface is not nearly as intuitive, and lacks adequate protection against doing bad things. I accidentally wiped my data trying to reset my CIFS setup. I don't think there ARE any. If you manage to do something stupid, the ZFS guides all tell you that this is what those backups you made are for. With the complex geometry of the disks, and the modify on write architecture of the ZFS filesystem, I'm pretty sure you're screwed.
|
# ? Aug 2, 2010 03:31 |
|
OatmealRocks posted:I've been looking at the Synology DS110j. I don't know much about NAS and have been reading up. I have read performance is to be expected (not great).. but does this means in terms for file read and write speeds via wireless? Speeds are here, http://www.synology.com/enu/products/1bay_perf.php WiFi is always going to be limited, you need tri-band 802.11n for best performance and it's not easy finding that.
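As a sanity check on what wireless vs. wired buys you, here's some back-of-the-envelope arithmetic. The nominal link rates are standard; the efficiency factors (~50% for WiFi, ~90% for wired) are ballpark assumptions, not measurements:

```python
# Rough ceilings for sustained transfer speed over common links.

def max_transfer_mb_s(link_mbit, efficiency):
    """Approximate MB/s ceiling for a link's nominal Mbit/s rate."""
    return link_mbit * efficiency / 8

wifi_g = max_transfer_mb_s(54, 0.5)    # 802.11g
wifi_n = max_transfer_mb_s(130, 0.5)   # single-stream 802.11n
fast_e = max_transfer_mb_s(100, 0.9)   # 100 Mbit wired Ethernet
gig_e = max_transfer_mb_s(1000, 0.9)   # gigabit wired Ethernet

for name, mbs in [("802.11g", wifi_g), ("802.11n", wifi_n),
                  ("100Mbit", fast_e), ("GigE", gig_e)]:
    print(f"{name}: ~{mbs:.1f} MB/s")
```

In other words, 75MB/sec is only on the table with gigabit end to end; over wireless-g you're looking at single digits.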
|
# ? Aug 2, 2010 04:01 |
|
dj_pain posted:I've got OpenSolaris running on a VM with 2x1TB disks running off it. So far it works great I've got OpenSolaris running an Ubuntu VM with VirtualBox, and I'm desperately seeking a new solution. For some reason, bridging the one physical NIC into two virtual ones is causing havoc with my networking. It drops the connection frequently, and it gets really bad until I reboot it, about once a week. I had this same problem with Xen with OpenSolaris as Dom0. So I'm not sure if I want to go the ESXi route or an Ubuntu host with something else hosting the OpenSolaris VM.
|
# ? Aug 2, 2010 04:48 |
|
FISHMANPET posted:I've got OpenSolaris running an Ubuntu VM with VirtualBox, and I'm desperately seeking a new solution. For some reason, bridging the one physical NIC into two virtual ones is causing havoc with my networking. It drops the connection frequently, and it gets really bad until I reboot it, about once a week. I had this same problem with Xen with OpenSolaris as Dom0. So I'm not sure if I want to go the ESXi route or an Ubuntu host with something else hosting the OpenSolaris VM. Get an Intel card. My e1000 based Intel NIC handles the multiple VMs fine. Apparently the injection they use makes some drivers poo poo themselves.
|
# ? Aug 2, 2010 06:19 |
|
Methylethylaldehyde posted:With the complex geometry of the disks, and the modify on write architecture of the ZFS filesystem, I'm pretty sure you're screwed. ZFS uses a copy-on-write architecture, which means if the delete were a single, atomic step, it would be pretty much guaranteed that all of the data were still there, perfectly preserved, and restoring it all would be a fairly simple act of going back one transaction. Now, given the amount of disk activity I could hear going on, I don't think what went down was as simple as a single delete, but since there was no other activity afterward, most if not all of the needed file system data should be around. For now, I've just ordered a couple new 2TB drives that I'll restore things onto, but I'll definitely be taking a stab at recovering from the deleted drives on a rainy day.
|
# ? Aug 2, 2010 06:20 |
|
Zhentar posted:ZFS uses a copy-on-write architecture, which means if the delete were a single, atomic step, it would be pretty much guaranteed that all of the data were still there, perfectly preserved, and restoring it all would be a fairly simple act of going back one transaction. Now, given the amount of disk activity I could hear going on, I don't think what went down was as simple as a single delete, but since there was no other activity afterward, most if not all of the needed file system data should be around. I knew it was one or the other. Mostly the variable width stripes are what's going to gently caress you in the recovery. Especially if the geometry information was wiped.
|
# ? Aug 2, 2010 06:55 |
|
Methylethylaldehyde posted:Get an Intel card. My e1000 based Intel NIC handles the multiple VMs fine. Apparently the injection they use makes some drivers poo poo themselves. Yep that's what I have as well. ESXi doesn't support Realtek NICs out of the box so I just ordered two e1000.
|
# ? Aug 2, 2010 07:05 |
|
Methylethylaldehyde posted:Get an Intel card. My e1000 based Intel NIC handles the multiple VMs fine. Apparently the injection they use makes some drivers poo poo themselves. I've had this same problem with two separate Intel NICs.
|
# ? Aug 2, 2010 07:11 |
|
FISHMANPET posted:I've had this same problem with two separate Intel NICs. What was the driver used by the cards? Are the cards themselves on the HCL? Osol is hilariously picky about some stuff.
|
# ? Aug 2, 2010 07:17 |
|
Zhentar posted:Does anyone know of any ZFS file recovery tools? On the command line: zpool import -f poolname
|
# ? Aug 2, 2010 12:06 |
|
Combat Pretzel posted:After 8 seconds of inactivity, the heads are parked off-platter onto a ramp. Next IO, they're unparked again until the idle time-out is reached again. If you don't stress your drive well enough, you'll be racking up a lot of these (un-)load cycles. The drives are only rated 50000 or so. I was just getting ready to move up to a NAS at home when I read this. So now I'm worried. I was looking at grabbing a Synology 210j or 410j, and some new drives to install. However, from what I can tell from this discussion there are two potential issues: TLER (which will gently caress up the RAID) and idle head parking (which will shorten the lifespan). So questions - - Is idle head parking just a WD thing? If I read correctly this can be overcome by the NAS, assuming it can manipulate advanced power management settings? Does the Synology do this (I would assume it does but I can't easily find it on their page)? - The TLER issue is not something that can be fixed by changing NAS settings? So I'd need to find drives that either don't have it or where it can be disabled - is there any resource that's compiled that information yet? EDIT: From what I could google up it looks like TLER isn't an issue for most of the software-NAS devices that are discussed in here: http://www.smallnetbuilder.com/nas/nas-features/31202-should-you-use-tler-drives-in-your-raid-nas quote:The responses I received from Synology, QNAP, NETGEAR and Buffalo all indicated that their NAS RAID controllers don't depend on or even listen to TLER, CCTL, ERC or any other similar error recovery signal from their drives. Instead, their software RAID controllers have their own criteria for drive timeouts, retries and when a drive is finally marked bad. Crackbone fucked around with this message at 14:05 on Aug 2, 2010 |
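To put numbers on why the load-cycle rating matters: the 8-second park timeout and 50,000-cycle rating are from the quoted post, while the once-a-minute wake pattern is an assumed (worst-ish case) workload, not a measurement:

```python
# If light background IO wakes the drive about once a minute, each minute
# costs roughly one load/unload cycle against the rating.

RATED_CYCLES = 50_000
wake_interval_s = 60  # one park + unpark per minute of light, bursty IO (assumed)

seconds_to_exhaust = RATED_CYCLES * wake_interval_s
days = seconds_to_exhaust / 86_400
print(f"rating exhausted in about {days:.0f} days")
```

Real workloads are rarely that pathological, but it shows why people disable idle parking (e.g. with wdidle3) on drives behind an always-slightly-busy NAS.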
# ? Aug 2, 2010 13:50 |
|
Methylethylaldehyde posted:I knew it was one or the other. Mostly the variable width stripes are what's going to gently caress you in the recovery. Especially if the geometry information was wiped. Yeah, that would complicate things. Though I have the benefit of knowing the correct file size for everything, so that could help fill in some gaps. Combat Pretzel posted:Try: Unfortunately, that doesn't do me any good because the pool is just fine. I think what went down was a 'zfs destroy' Edit: it looks like zpool history will at least tell me what happened. Zhentar fucked around with this message at 16:49 on Aug 2, 2010 |
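For what it's worth, the usual first steps from the CLI are below; "tank" is a placeholder pool name, and note that `zpool import -D` only helps when an entire pool was destroyed, not a single dataset hit by `zfs destroy`:

```shell
# Long-form log of every zfs/zpool command run against the pool,
# with timestamps -- confirms exactly what was destroyed and when.
zpool history -l tank
# List pools that were destroyed but whose labels haven't been overwritten yet;
# add the pool name to actually re-import one.
zpool import -D
```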
# ? Aug 2, 2010 16:41 |
|
Does anyone have any experience with SiI3132-based PCIe (1x) to SATA (with RAID) cards? After updating my mobo BIOS to the latest possible (albeit 3 years old, as it's an old S939 board), the system doesn't POST with the card installed: no display, no beeps, nothing. Without the card, it boots fine. I'm just trying to get it up and recognized in Windows 7 to ensure the card's BIOS is up to date before trying again in OpenSolaris. I also tried throwing it in my P55/1156-based MSI board, with the same result. Bad card?
|
# ? Aug 2, 2010 22:32 |
|
Crackbone posted:I was just getting ready to move up to a NAS at home when I read this. So now I'm worried. All of this head-parking, WDIDLE, TLER confusion is why I only use enterprise drives, even in my home setups. Unfortunately they're way more expensive.
|
# ? Aug 2, 2010 22:54 |
|
Wanderer89 posted:Does anyone have any experience with SiI3132-based PCIe (1x) to SATA (with RAID) cards? If it does the same thing in your P55 board, yeah, dead or otherwise defective card. I've had weird controller conflicts before with SiI chips and onboard controllers (I think it was an Asus board with its own older SiI on it) but you shouldn't have that happen with a newer P55 board.
|
# ? Aug 2, 2010 23:00 |
|
Star War Sex Parrot posted:All of this head-parking, WDIDLE, TLER confusion is why I only use enterprise drives, even in my home setups. Unfortunately they're way more expensive. Yeah, I know it's a minefield but setting up a NAS with enterprise drives is just not in my price range, not when it means I'm looking at $600 minimum for 2TB of mirrored storage.
|
# ? Aug 2, 2010 23:01 |
|
Use ZFS or mdadm and it's not that big of a deal. Heroic recovery will be a non-issue, and the drives have a 3-year warranty; assuming you have parity, you should have no problems getting the replacement RMA drive. If you plan to keep your drives for more than 3 years, well, I don't know what to tell you.
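For anyone following along, the setup being described is a one-liner in either tool; the device names below are placeholders:

```shell
# ZFS mirror on Solaris-style device names, plus a hot spare for paranoia:
zpool create tank mirror c0t0d0 c0t1d0
zpool add tank spare c0t2d0
# Equivalent mdadm mirror on Linux:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```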
|
# ? Aug 3, 2010 00:05 |
|
adorai posted:use zfs or mdadm and it's not that big of a deal. Heroic recovery will be a non issue, and the drives have a 3 year warranty, assuming you have parity you should have no problems getting the replacement RMA drive. If you plan to keep your drives for more than 3 years, well, I don't know what to tell you. For paranoia purposes I got a hot spare for my 4+1 RAIDZ pool.
|
# ? Aug 3, 2010 00:48 |
|
dj_pain posted:Yep that's what I have as well. ESXi doesn't support Realtek NICs out of the box so I just ordered two e1000. What method did you use to pass through the disks to OpenSolaris?
|
# ? Aug 3, 2010 10:44 |
|
Crackbone posted:Yeah, I know it's a minefield but setting up a NAS with enterprise drives is just not in my price range, not when it means I'm looking at $600 minimum for 2TB of mirrored storage. Anything that's claiming to be green or power-saving will do things like idle parking. Get some WD Black or the equivalent from other manufacturers.
|
# ? Aug 3, 2010 16:06 |
|
Combat Pretzel posted:Anything that's claiming to be green or power-saving will do things like idle parking. Get some WD Black or the equivalent from other manufacturers. Thanks. I think I'm going to grab a couple more Spinpoint F3s and repurpose the ones I have in my PC now.
|
# ? Aug 3, 2010 16:13 |
|
Combat Pretzel posted:Anything that's claiming to be green or power-saving will do things like idle parking. Get some WD Black or the equivalent from other manufacturers. My Samsung HD154s haven't done this - smartctl output: code:
|
# ? Aug 3, 2010 18:56 |
|
Well, CIFS small-file performance is massively improved switching to NexentaStor from FreeNAS; now I would consider things acceptable. But write throughput is still unimpressive; I'm usually getting 200-250 mb/s network throughput, but it stutters a lot, cutting out for a couple seconds at a time. Edit: Looked at the Analytics tab in Nexenta. My transfer cutting out for a few seconds lines up perfectly with SPA sync calls taking several seconds. Research suggests all ZFS activity halts during SPA syncs. Now I just have to figure out what the hell I'm supposed to do about it. Zhentar fucked around with this message at 05:43 on Aug 4, 2010 |
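If it is the txg syncs, the era-appropriate knobs live in /etc/system. The tunable names and values below are illustrative for OpenSolaris/Nexenta builds of this period and should be checked against your build before use:

```
* /etc/system fragment (comments start with *)
* Flush transaction groups more often so each sync has less data to write:
set zfs:zfs_txg_timeout = 5
* Optionally cap how much dirty data a single txg may buffer (bytes):
set zfs:zfs_write_limit_override = 0x8000000
```

Both require a reboot to take effect.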
# ? Aug 4, 2010 04:55 |
|
And of course it was the SATA controller. The SiI3124 may be supported, but that doesn't mean it will work well. Now I'm sustaining 40MB/s, which is good enough for me.
|
# ? Aug 4, 2010 07:00 |
|
IOwnCalculus posted:My Samsung HD154s haven't done this - smartctl output: The parking is counted in 'Load Cycle Count'.
|
# ? Aug 4, 2010 17:01 |
|
This may be a question previously answered; if so, my apologies. I'm speccing a replacement NTFS file server. Our current server is an old PowerEdge 2650 with 6x147GB U320 drives in a RAID 5. Our new server is an HP StorageWorks X1600 with 12 bays of SAS/SATA storage. For budget reasons, I'm planning on filling the 12 bays with Seagate 1TB ES drives and running a 6+0, producing about 8TB of space. Is this a good plan, or should I get 6Gb/s drives (the controller can handle these), probably smaller ones around 640GB? Option B is to have two mirrored arrays of 6 drives each in a RAID 6. Our data is small, currently around 600GB, so rebuilds won't take too long for a while. Thanks
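The capacity math for the layouts being weighed, as a quick check (drive counts and sizes taken from the post):

```python
# "6+0" = two striped 6-drive RAID 6 groups (RAID 60) vs. mirrored pairs.

def raid60_usable(drives, group_size, drive_tb):
    """RAID 60: each RAID 6 group loses two drives to parity."""
    groups = drives // group_size
    return groups * (group_size - 2) * drive_tb

def raid10_usable(drives, drive_tb):
    """RAID 10: half the drives are mirror copies."""
    return drives // 2 * drive_tb

print(raid60_usable(12, 6, 1))  # 8 TB, matching the "about 8TB" in the post
print(raid10_usable(12, 1))     # 6 TB
```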
|
# ? Aug 4, 2010 17:07 |
|
Combat Pretzel posted:The parking is counted in 'Load Cycle Count'. Huh; must not be tracked by the Samsung drives then, I don't have that at all. Here's smartctl -A for one of them: code:
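If you want to check programmatically whether a drive reports attribute 193 at all, parsing `smartctl -A` output is straightforward. The sample text below is fabricated for illustration, not real drive output:

```python
# Pull attribute 193 (Load_Cycle_Count) out of `smartctl -A` text.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  9 Power_On_Hours          0x0032   097   097   000    Old_age   Always       -       12345
193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       4321
"""

def load_cycle_count(smart_text):
    for line in smart_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "193":
            return int(fields[-1])  # RAW_VALUE is the last column
    return None  # attribute not reported, as on these Samsung drives

print(load_cycle_count(SAMPLE))  # 4321
```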
|
# ? Aug 4, 2010 17:44 |
|
A 2.5" SATA drive should work in a Drobo with the proper adapters, right?
|
# ? Aug 5, 2010 06:47 |
|
Are there any SMART monitoring programs/scripts that will run over HTTP or generate a PHP page like phpsysinfo?
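phpsysinfo aside, rolling your own is not much code. A sketch in Python: `smart_to_html()` and the sample line are my own illustration, not part of any existing package, and the actual smartctl call is left commented out since it needs root and a real disk:

```python
# import subprocess
# text = subprocess.run(["smartctl", "-A", "/dev/sda"],
#                       capture_output=True, text=True).stdout

def smart_to_html(device, smart_text):
    """Turn `smartctl -A` text into a small HTML table fragment."""
    rows = []
    for line in smart_text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():  # attribute rows start with the ID
            rows.append(f"<tr><td>{fields[0]}</td><td>{fields[1]}</td>"
                        f"<td>{fields[-1]}</td></tr>")
    return (f"<h2>{device}</h2><table>"
            "<tr><th>ID</th><th>Attribute</th><th>Raw</th></tr>"
            + "".join(rows) + "</table>")

sample = "  9 Power_On_Hours 0x0032 097 097 000 Old_age Always - 12345"
print(smart_to_html("/dev/sda", sample))
```

Serve the result from any web server, or hand it to Python's built-in http.server if you want zero dependencies.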
|
# ? Aug 5, 2010 08:11 |
|
|
The_Frag_Man posted:What method did you use to pass through the disks to OpenSolaris? Just follow this guide.
|
# ? Aug 5, 2010 09:00 |