Nomex
Jul 17, 2002

Flame retarded.

User 39204 posted:

This is honestly a big part of why we run FC.

You're missing the big picture. You offload the traffic to the network guys, so when they break your storage network you get overtime. It's a win-win!

Nomex
Jul 17, 2002

Flame retarded.

skipdogg posted:

What is this 'overtime' you speak of?

This email that I just got right here is what it is:

"Thanks Nomex. At this time I am leaning towards servers loosing connection/communication and not noticing. One of the major changes was in the teaming of the Nexus switches....."

Nomex
Jul 17, 2002

Flame retarded.
I know what he meant; it was just good timing with that email.

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

Our Nexus switches, layer 2 only, cost us somewhere around $30k for the pair, each with 32 ports. Why would you even consider looking at dedicated fibre channel infrastructure when you would need to spend over $1k per port for FC?
Last time I checked, we were maxing our 10G ports out at around 1 Gbps each. During backups we spiked a little higher, but not much. I somewhat regret spending the cash for 10G, until I look at the back of my racks and see half as many cables.

If you have the end-to-end hardware to support FCoE, there really isn't much reason to have a dedicated FC network anymore, IMO.

Nomex
Jul 17, 2002

Flame retarded.

cheese-cube posted:

I haven't seen that picture but I'm imagining a server with a bunch of 4-port USB PCI cards and a bazillion external USB HDDs plugged into it. Please tell me that is the case because it would be hilarious.

No joke, one of my clients used to use USB drives essentially as backup tapes. They had something like 160 LaCie dual 500GB USB drives that they hooked to 6 separate servers, one for each day of the week and one for the weekend. The shipping case to take them offsite was immense, and the failure rate was spectacular.

Nomex
Jul 17, 2002

Flame retarded.
If you have a virtualized workload that requires high IO, you should use a raw device mapping (RDM) for your storage disk. You lose some of your available IO when you slap VMFS on a disk, whereas with an RDM the only file system you have to deal with is the one on the underlying storage.
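
If you're scripting the hookup, this is roughly what an RDM disk spec looks like through pyVmomi. This is a sketch only: the device path, controller key, unit number, and size are placeholders you'd pull from your own environment, and I haven't run this exact snippet against a live vCenter.

from pyVmomi import vim

def rdm_disk_spec(naa_path, controller_key, unit_number, size_kb):
    # Back the virtual disk with the raw LUN instead of a VMDK on VMFS.
    backing = vim.vm.device.VirtualDiskRawDiskMappingVer1BackingInfo()
    backing.deviceName = naa_path               # e.g. "/vmfs/devices/disks/naa...." (placeholder)
    backing.compatibilityMode = "physicalMode"  # pass SCSI commands straight to the array
    backing.diskMode = "independent_persistent"
    backing.fileName = ""                       # let the mapping file land in the VM's folder

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = controller_key         # key of an existing SCSI controller on the VM
    disk.unitNumber = unit_number
    disk.capacityInKB = size_kb

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec

# vm is a vim.VirtualMachine you already looked up through a pyVmomi session:
# vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[rdm_disk_spec(path, 1000, 1, 104857600)]))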

Nomex
Jul 17, 2002

Flame retarded.
With NetApp you can have a large aggregate of spinning disk with some SSDs mixed in for cache. Blocks are moved up into the Flash Pool when they get busy, then get destaged when they stop being accessed. It works kinda like their PAM cards, only it's read/write and aggregate-specific rather than system-wide.
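
If it helps to picture it, here's a toy model of that read/write cache behaviour. This is not NetApp's actual algorithm, just a minimal sketch of "hot blocks live in flash, cold dirty blocks get destaged back to disk":

from collections import OrderedDict

class ToyFlashCache:
    """Toy read/write cache in front of a slow 'disk' dict (illustration only)."""

    def __init__(self, disk, capacity=4):
        self.disk = disk            # backing store: {block_id: data}
        self.capacity = capacity    # how many blocks the flash tier holds
        self.cache = OrderedDict()  # block_id -> (data, dirty)

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # keep hot blocks at the MRU end
            return self.cache[block][0]
        data = self.disk[block]             # cache miss: read from spinning disk
        self._insert(block, data, dirty=False)
        return data

    def write(self, block, data):
        # Overwrites land in flash first and get destaged later.
        self._insert(block, data, dirty=True)

    def _insert(self, block, data, dirty):
        self.cache[block] = (data, dirty)
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            cold, (cold_data, cold_dirty) = self.cache.popitem(last=False)
            if cold_dirty:
                self.disk[cold] = cold_data  # destage a cold dirty block to disk

disk = {i: f"block{i}" for i in range(10)}
pool = ToyFlashCache(disk, capacity=4)
pool.write(3, "updated")
print(pool.read(3))   # served straight from flash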

Another way you might want to approach it is to use something like an HP DL980 with 14 Fusion-io cards and whatever your favorite flavor of virtual storage appliance is.

How big is your active dataset per workstation?

Nomex fucked around with this message at 01:46 on Nov 28, 2012

Nomex
Jul 17, 2002

Flame retarded.

szlevi posted:

Yeah, that 'moving blocks'/tiering approach never worked, never will for this type of thing, I can tell you that already. :)
As for being sequential - it's only per file but you have several users opening and closing plenty of different files etc so it's far from the usual video editing load.
Size can vary from a few gigabytes to 50-100GB per file (think 4K and higher, RGBA fp32, etc.) - we've developed our own raw format and wrote plugins for Fusion, Max, etc., so we're actually pretty flexible if it comes down to that...

FWIW I have 2 Fusion-io Duo cards; they were very fast when I bought them for $15k apiece, and now they are just fast, but the issue from day 1 is Windows CIFS: up to 2008 R2 (SMB 2.1) CIFS is an utter piece of poo poo, it simply chops everything up into 1-3k size pieces so it pretty much destroys any chance of taking advantage of the cards' bandwidth.
Just upgraded my NAS boxes (Dell NX3000s) to Server 2012, I'll test SMB3.0 with direct FIO shares again - I'm sure it's got better but I doubt it's got that much better...

Since going with an established big name would be very expensive (10GB/s!), as I see it I have to choose between two approaches:
1. building my own Server 2012-based boxes, e.g. plugging in 3-4 2GB/s or faster PCIe storage cards, most likely running the whole shebang as a file-sharing cluster (2012 got a new active-active scale-out mode), hoping SMB 3.0 got massively better
2. going with some new solution, coming from a new, small firm, hoping they will be around or bought up - and only, of course, after acquiring a demo unit to see real performance

I can also wait until Dell etc buys up a company/rolls out something new but who knows when they will have affordable 10GB/s...?

Sorry, I shouldn't have said 'move' in relation to blocks. Flash Pool is a caching system; it doesn't do any tiering. Reads and overwrites are cached, but the Flash Pool stays consistent with the disks.

Just out of curiosity, why are you using CIFS? Why not mount a LUN instead? You can slap a dual-port 8Gb FC HBA in and pull way, way higher throughput than you'll get over CIFS. How many clients are running at a time, and what kind of budget do you have for this?
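
Rough numbers, just to show the gap. These are link-level ceilings and a made-up small-request rate, not measurements from your boxes:

# Link-level ceilings vs. a small-request CIFS pattern -- illustrative only.

def fc_ceiling_mb_s(ports, gbit_per_port=8):
    # 8G Fibre Channel uses 8b/10b encoding, so roughly 1 payload byte per 10 line bits.
    return ports * gbit_per_port * 1000 / 10          # MB/s per direction

def small_request_mb_s(request_kb, requests_per_sec):
    # If the client chops IO into tiny requests, throughput is capped by
    # request size x request rate, no matter how fast the flash underneath is.
    return request_kb * requests_per_sec / 1024

print(f"dual-port 8Gb FC ceiling: ~{fc_ceiling_mb_s(2):.0f} MB/s")       # ~1600 MB/s
print(f"2KB requests at 20,000/s: ~{small_request_mb_s(2, 20000):.0f} MB/s")  # ~39 MB/s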

Nomex
Jul 17, 2002

Flame retarded.
I can vouch for OnCommand. I honestly don't know how I ever managed NetApp hardware without it.

Nomex
Jul 17, 2002

Flame retarded.

evil_bunnY posted:

What happened after?

1 in 3 companies that suffer a catastrophic data loss never recover.

Nomex
Jul 17, 2002

Flame retarded.

madsushi posted:

I have been doing some NetApp vs Nimble comparisons lately, and it seems like there is one feature on Nimble that I don't quite understand. Nimble claims that their method of coalescing random writes into sequential stripes is somehow much faster than NetApp, and in fact Nimble claims that their write methods are up to 100x faster than others. I don't really see how this is possible. Can anyone with Nimble experience/knowledge add any insight?

With spinning disk, if you're making a long sequential write you get high bandwidth, but the second the head has to start jumping around the platter your performance tanks. A 3.5" 7200 RPM SATA drive will do somewhere between about 80 and 130 MB/s on a sequential write, but only somewhere around 500KB/s-1.5MB/s on 4K random writes. That's the seek time plus the rotational latency from the head having to wait for the data to come around again on every IO. The Nimble array would still have to do uncached reads from the disk, which would bring performance down somewhat, but coalescing all the writes into sequential order would definitely boost write performance.
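
The back-of-envelope math, if you want to sanity-check it. The seek figure is a typical spec-sheet assumption, and real drives queue and reorder writes, so measured numbers come out a bit higher than this floor:

# Rough numbers for a 7200 RPM SATA drive doing uncached 4K random writes.

rpm = 7200
avg_seek_ms = 8.5                       # typical quoted average seek (assumption)
avg_rotation_ms = (60_000 / rpm) / 2    # wait half a revolution on average

service_ms = avg_seek_ms + avg_rotation_ms      # time per random 4K write
random_iops = 1000 / service_ms
random_kb_s = random_iops * 4                   # 4 KiB per IO

print(f"avg rotational latency: {avg_rotation_ms:.2f} ms")               # ~4.17 ms
print(f"random 4K writes:       ~{random_iops:.0f} IOPS, ~{random_kb_s:.0f} KB/s")
print("sequential writes:      ~80-130 MB/s (vendor spec sheets)")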

NetApp Flash Pool caches overwrites though, so if you have that, it really helps with write performance anyway.

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

Compounding the problem is that we have to provide VDI for 250 seats in about 3 months or less. We are prepared with a working solution, but don't have the SAN capacity. Our choices are:

Add more spindles to both our Production and DR Netapp ($50k for each site)
Add Flash Cache to both our Production and DR Netapp ($50k for each site)
Purchase another SAN that is low capacity but high IO and a DR counterpart ($Unknown)

Since I won't need any features beyond 10GbE iSCSI and high IO, do you think I am better off upgrading my Netapp and paying their premium, or should I look elsewhere? I am not sure if I can actually beat an outlay of $100k for both sides, but I was thinking that it might be possible. What vendors should I be looking at? I've inquired about pricing on a 6TB Oracle 7320, but won't have a quote until next week. Anyone have any pricing experience on that gear?

The route we took was to load a Fusion-io card into each of our VDI servers and serve the production VDI image from there. All the dev stuff is still on spindles, but everyone boots from the Fusion-io card. We were originally going to go with 512GB of extra Flash Cache and a couple of 15k DS4243 shelves, but the flash-backed production image works great and cost less.

Nomex
Jul 17, 2002

Flame retarded.

pr0digal posted:

While I read through the thread I figured I would pose a question.

I do IT for a media company, and to be honest I know jack poo poo about SAN solutions aside from the Apple Xsan with 336 TB of Pegasus RAIDs attached that we're currently running, along with two ActiveRAIDs running SANmp. Everything is over Fibre controlled by those wonderful QLogic boxes, though the Xsan requires two ethernet drops for its metadata control. We've got an expansion coming up in a different building, our ActiveRAIDs are pretty much at capacity, and I'm really starting to hate SANmp. We edit 1080i video encoded in Apple ProRes 422 (LT) with a data rate of around 100 megs/s, usually with multiple streams plus heavy effects work, to the point that we've managed to piss off the Xsan every so often.

My question to you all is: what are some viable solutions for our needs? We'll probably have 10 machines hooked up to this SAN and only need between 60 and 72TB of storage. Fibre is preferable due to our bandwidth needs, but from reading the thread Ethernet can do just as well. The main thing we're gunning for is an ISIS 5000 system from Avid, because as a department we're looking to switch from Final Cut to Avid in the near future, and since the ISIS system is kind of all-in-one for Avid that would be great! But of course we have to convince upper management of that, and upper management always needs alternatives. I'm hesitant to look at Active Storage solutions because they are possibly going out of business, and I have EMC breathing down my neck about their Isilon system. So I would appreciate some suggestions regarding affordable storage solutions for a media company, because honestly I'm in over my head when it comes to SAN solutions.

Do you know what kind of IO requirements you're looking at? Given that it's video editing I'd guess long sequential reads and writes, but how much data does each system work on at one time? Are you able to grab any metrics off the XSAN?

Nomex
Jul 17, 2002

Flame retarded.

Misogynist posted:

I'm curious, do any of you other guys in the tiering discussion work in an environment with petabyte-range raw data? SSD caching is great for applications with repeatable hotspots, but it's going to be a long time before it can play with the HPC kids.

SSD caching is at best a fad. SSD prices are quickly approaching parity with spinning disk. In less than 3 years it will probably be just as cheap to outfit an array entirely with SSDs and skip the slow tier altogether. I can't wait for the day when IO sizing is essentially a thing of the past.

Nomex
Jul 17, 2002

Flame retarded.

cheese-cube posted:

I'm pretty sure that dedupe is turned on but I'd have to check with the storage team. Is there a general best-practice for snapshot capacity requirements based on volume size?

It really depends on your snapshot schedule and volume deltas. 20% is a good starting point generally.
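
A quick way to ballpark it yourself. The change rates and retention below are placeholder assumptions, and this ignores overlap between days, so measure your own deltas before sizing for real:

# Crude snapshot-reserve estimate -- replace the assumptions with measured deltas.

def snapshot_reserve_gb(volume_gb, daily_change_rate, retained_days):
    # Space held by snapshots is roughly (changed data per day) x (days retained).
    return volume_gb * daily_change_rate * retained_days

print(snapshot_reserve_gb(1000, daily_change_rate=0.03, retained_days=7))   # 210.0 GB
print(snapshot_reserve_gb(1000, daily_change_rate=0.02, retained_days=10))  # 200.0 GB

Both of those land right around the 20% mark on a 1TB volume, which is where that rule of thumb comes from.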
