|
User 39204 posted: This is honestly a big part of why we run FC. You're missing the big picture. You offload the traffic to the network guys, so when they break your storage network you get overtime. It's a win-win!
|
# ¿ Oct 18, 2012 23:00 |
|
skipdogg posted: What is this 'overtime' you speak of? This email that I just got right here is what it is: "Thanks Nomex. At this time I am leaning towards servers losing connection/communication and not noticing. One of the major changes was in the teaming of the Nexus switches....."
|
# ¿ Oct 18, 2012 23:18 |
|
I know what he meant; it was just good timing with that email.
|
# ¿ Oct 18, 2012 23:22 |
|
adorai posted: Our Nexus switches, layer 2 only, cost us somewhere around $30k for the pair, each with 32 ports. Why would you even consider dedicated Fibre Channel infrastructure when you would need to spend over $1k per port for FC? If you have the end-to-end hardware to support FCoE, there really isn't much reason to have a dedicated FC network anymore, IMO.
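For what it's worth, the per-port math behind that comparison works out like this (a sketch using the figures quoted above; I'm assuming all 32 ports per switch are usable and taking the $1k/port FC number at face value):

```python
# Per-port cost comparison from the figures quoted above.
# Assumes all 32 ports on each Nexus switch are usable.
nexus_pair_cost = 30_000   # quoted cost for the pair
nexus_ports = 2 * 32       # 32 ports per switch
fc_cost_per_port = 1_000   # quoted lower bound for dedicated FC

nexus_cost_per_port = nexus_pair_cost / nexus_ports
print(f"Nexus: ${nexus_cost_per_port:.2f}/port")          # $468.75/port
print(f"FC premium: {fc_cost_per_port / nexus_cost_per_port:.1f}x")
```

So even before you count the separate switching fabric, cabling, and HBAs, dedicated FC starts at roughly double the per-port price.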
|
# ¿ Oct 20, 2012 17:39 |
|
cheese-cube posted: I haven't seen that picture but I'm imagining a server with a bunch of 4-port USB PCI cards and a bazillion external USB HDDs plugged into it. Please tell me that is the case because it would be hilarious. No joke, one of my clients used to use USB drives essentially as backup tapes. They had something like 160 LaCie dual 500GB USB drives hooked to 6 separate servers, one for each day of the week and one for the weekend. The shipping case for taking them offsite was immense and the failure rate was spectacular.
|
# ¿ Oct 25, 2012 05:04 |
|
If you have a virtualized workload that requires high IO, you should use a raw device mapping (RDM) for your storage disk. Slapping VMFS on a disk costs you available IO, whereas with an RDM the only thing in the path is your underlying storage's file system.
|
# ¿ Nov 12, 2012 17:41 |
|
With NetApp you can have a large aggregate of spinning disk with some SSDs mixed in for cache. Blocks are moved up into the Flash Pool when they get busy, then get de-staged when they stop being accessed. It works kind of like their PAM cards, except it's read/write and aggregate-specific rather than system-wide. Another way you might approach it is something like an HP DL980 with 14 Fusion-io cards and whatever your favorite flavor of virtual storage appliance is. How big is your active dataset per workstation?
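To put numbers on why the active-dataset question matters, here's a rough model (the latencies are assumed round figures for illustration, not NetApp specs): average read latency is dominated by how much of the working set the flash tier can hold.

```python
# Illustrative sketch: average read latency for a flash-cached aggregate
# as a function of how much of the active dataset fits in the SSD tier.
SSD_LATENCY_MS = 0.2   # assumed flash read latency
HDD_LATENCY_MS = 8.0   # assumed 7.2k SATA read latency

def avg_read_latency_ms(active_set_gb: float, cache_gb: float) -> float:
    """Average latency, assuming the cache holds the hottest blocks first."""
    hit_rate = min(1.0, cache_gb / active_set_gb)
    return hit_rate * SSD_LATENCY_MS + (1 - hit_rate) * HDD_LATENCY_MS

# A 500 GB flash pool against a 400 GB active set: all reads from flash.
print(avg_read_latency_ms(400, 500))             # 0.2
# The same 500 GB against a 5 TB active set: mostly spindle reads.
print(round(avg_read_latency_ms(5000, 500), 2))
```

The takeaway: the same cache is either transformative or nearly useless depending on working-set size, which is why that's the first question to answer.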
# ¿ Nov 28, 2012 01:38 |
|
szlevi posted: Yeah, that 'moving blocks'/tiering approach never worked and never will for this type of thing, I can tell you that already. Sorry, I shouldn't have said "move" in relation to blocks. Flash Pool is a caching system; it doesn't do any tiering. Reads and overwrites are cached, but the flash pool stays consistent with the disks. Just out of curiosity, why are you using CIFS? Why not mount a LUN instead? You can slap in a dual 8Gb FC HBA and pull far higher throughput than you get over CIFS. How many clients run at a time, and what kind of budget do you have for this?
|
# ¿ Nov 29, 2012 06:33 |
|
I can vouch for OnCommand. I honestly don't know how I used NetApp hardware without it.
|
# ¿ Dec 5, 2012 19:51 |
|
evil_bunnY posted: What happened after? 1 in 3 companies that suffer a catastrophic data loss never recover.
|
# ¿ Jan 18, 2013 07:43 |
|
madsushi posted: I have been doing some NetApp vs Nimble comparisons lately, and there is one feature on Nimble that I don't quite understand. Nimble claims that their method of coalescing random writes into sequential stripes is somehow much faster than NetApp's, and in fact Nimble claims that their write methods are up to 100x faster than others. I don't really see how this is possible. Can anyone with Nimble experience/knowledge add any insight? With spinning disk, a long sequential write gets you high bandwidth, but the second the head has to start jumping around the platter your performance tanks. A 3.5" 7200RPM SATA drive will do somewhere between about 80 and 130 MB/s on a sequential write, but only around 500KB/s-1.5MB/s on 4k random writes, because of the rotational latency introduced when the head has to wait for data to come around again. The Nimble array would still have to do uncached reads from the disk, which would bring performance down somewhat, but aligning all the writes into sequential order would definitely boost write performance. NetApp Flash Pool caches overwrites though, so if you have that, it really helps with write performance anyway.
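The rotational-latency arithmetic behind those numbers looks like this (a sketch; the seek time and sequential rate are assumed typical values, not a specific drive's spec):

```python
# Back-of-envelope check on why a 7200 RPM SATA drive that streams
# ~100 MB/s collapses to well under 1 MB/s on 4k random writes.
RPM = 7200
AVG_SEEK_MS = 8.5   # assumed average seek time
SEQ_MBPS = 100      # assumed sustained sequential rate

# Average rotational latency is half a revolution.
rot_latency_ms = 0.5 * 60_000 / RPM             # ~4.17 ms
service_time_ms = AVG_SEEK_MS + rot_latency_ms  # per random 4k write
iops = 1000 / service_time_ms                   # ~79 IOPS
random_mbps = iops * 4 / 1024                   # 4 KiB moved per IO

print(f"rotational latency: {rot_latency_ms:.2f} ms")
print(f"random 4k IOPS:     {iops:.0f}")
print(f"random throughput:  {random_mbps:.2f} MB/s")
print(f"sequential is ~{SEQ_MBPS / random_mbps:.0f}x faster")
```

That's a few-hundred-fold gap between sequential and random on one spindle, which is why coalescing random writes into full sequential stripes plausibly buys a huge multiplier, and also why it's not unique magic: anything that turns random writes sequential (including an overwrite cache) claws back the same gap.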
|
# ¿ Jan 29, 2013 06:47 |
|
adorai posted: Compounding the problem is that we have to provide VDI for 250 seats in about 3 months or less. We are prepared with a working solution, but don't have the SAN capacity. Our choices are: The approach we took was to load a Fusion-io card into each of our VDI servers and serve the production VDI image from there. All the dev stuff is still on spindles, but everyone boots from the Fusion-io card. We were originally going to go with 512GB of extra Flash Cache and a couple of 15k DS4243 shelves, but the flash-backed production image works great and cost less.
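A rough sketch of the sizing logic that makes per-host flash attractive for this (the IOPS figures are assumed for illustration, not from adorai's environment): a boot or login storm is random-read IOPS that spindles struggle to supply.

```python
# Why the production VDI image went on flash: boot-storm IOPS math.
# All figures below are assumed illustrative values.
SEATS = 250
BOOT_IOPS_PER_SEAT = 50    # assumed per-desktop IOPS during a boot storm
IOPS_PER_15K_DISK = 180    # assumed random-read IOPS per 15k spindle

storm_iops = SEATS * BOOT_IOPS_PER_SEAT
disks_needed = -(-storm_iops // IOPS_PER_15K_DISK)  # ceiling division

print(storm_iops)    # 12500
print(disks_needed)  # 70
```

Seventy 15k spindles just to absorb a boot storm versus one PCIe flash card per host that handles it with room to spare is the whole argument in two numbers.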
|
# ¿ Feb 8, 2013 22:10 |
|
pr0digal posted: While I read through the thread I figured I would pose a question. Do you know what kind of IO requirements you're looking at? Given that it's video editing, I'd guess long sequential reads and writes, but how much data does each system work on at one time? Are you able to grab any metrics off the Xsan?
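If metrics off the Xsan aren't available, you can at least bound the requirement from the video format itself; a sketch (the 3 bytes/pixel and the per-seat stream count are assumptions):

```python
# Estimate sustained throughput per editing seat from the stream format.
def stream_mb_per_s(width: int, height: int, bytes_per_pixel: int, fps: int) -> float:
    """Data rate of one uncompressed video stream in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

# One uncompressed 1080p30 stream at an assumed 3 bytes/pixel:
one_stream = stream_mb_per_s(1920, 1080, 3, 30)
print(f"{one_stream:.0f} MB/s per stream")   # 187 MB/s
# A seat scrubbing two source streams while writing one needs ~3x that.
print(f"{3 * one_stream:.0f} MB/s per seat")
```

Compressed codecs cut those numbers drastically, so the codec in use matters as much as the seat count when sizing the array.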
|
# ¿ Feb 14, 2013 03:41 |
|
Misogynist posted: I'm curious, do any of you other guys in the tiering discussion work in an environment with petabytes of raw data? SSD caching is great for applications with repeatable hotspots, but it's going to be a long time before it can play with the HPC kids. SSD caching is at best a fad. SSD prices are quickly approaching parity with spinning disk. In less than 3 years it will probably be just as cheap to outfit an array entirely with SSDs and skip the slow tier altogether. I can't wait for the day when IO sizing is essentially a thing of the past.
|
# ¿ Apr 16, 2013 00:03 |
|
|
cheese-cube posted: I'm pretty sure that dedupe is turned on but I'd have to check with the storage team. Is there a general best practice for snapshot capacity requirements based on volume size? It really depends on your snapshot schedule and volume deltas; 20% is generally a good starting point.
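A crude way to turn "depends on schedule and deltas" into a number (the daily change rate and retention are assumed inputs; real reserve needs come in lower when changed blocks overlap between snapshots, and dedupe changes the picture further):

```python
# Worst-case snapshot reserve sizing: the reserve must hold every block
# that changes while the oldest retained snapshot still locks old data.
def snapshot_reserve_pct(daily_change_pct: float, retention_days: int) -> float:
    """Upper-bound reserve as a % of volume size, ignoring block overlap."""
    return daily_change_pct * retention_days

# 3%/day churn with 7 daily snapshots lands right around the 20% rule
# of thumb; a quieter 1%/day volume only needs ~7%.
print(snapshot_reserve_pct(3, 7))   # 21
print(snapshot_reserve_pct(1, 7))   # 7
```

So the 20% starting point is really a bet that your volumes churn a few percent a day; measure the deltas and adjust from there.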
|
# ¿ May 23, 2013 23:50 |