namaste friends
Sep 18, 2004

by Smythe

adorai posted:

Oracle specifically supports it. I cannot think of a single other relational database that does, but if you know of one, please enlighten me.

I believe DB2, MaxDB and Sybase support NFS. That said, my point of contention is that it is neither impossible nor unheard of for customers to choose NFS over FC and iSCSI as a storage protocol. Please understand I'm not trying to tell you that one should choose NFS over FC or iSCSI. All I'm saying is that one shouldn't rule NFS out as a 'hack', for the reasons I've previously stated in this thread. It all depends on the customer's requirements, and I don't presume to understand individual clients' decision-making processes; I can only assume they're making competent decisions based on clear requirements dictated by their business.


namaste friends
Sep 18, 2004

by Smythe

madsushi posted:

With RAID-DP, you always want the biggest aggregates / raid groups you can get, as it saves you from wasting drives to new raid sets. Every aggregate means a new raid set, which means 2 disks lost to the dual parity drives. Ideally you'll split the drives evenly between your controllers and make the biggest aggregates you can, making sure to maximize your "raid group" size to minimize lost disks. More disks in an aggregate = more spindles your data is spread across = better performance.

Assuming you get ONTAP 8.0.1 on the FAS (which I am 99% sure you will, I think it's the only supported ONTAP for the 62xx series) you can make 64-bit aggregates, so you can toss as many disks as you want into a single aggregate (per controller).

Be careful about modifying your raid group sizes. If you make them too big, it will take an eternity for a raid group to rebuild after a disk failure. I wouldn't recommend changing them at all unless you have a very good reason for doing so (i.e. you had no choice).

The unfortunate problem with RAID6 (or DP) is that you "waste" a lot of disk for the sake of resiliency.

I agree with you, 64-bit aggregates are the way to go.

Nomex, 1 aggregate is fine.
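To put rough numbers on the parity overhead being discussed, here's a sketch of why fewer, larger raid groups waste fewer disks. (The 2-disks-per-group figure is RAID-DP's dual parity; the disk counts and raid group sizes below are illustrative, not NetApp defaults or limits.)

```python
import math

def usable_disks(total_disks, raid_group_size, spares=2):
    """Estimate data disks left after RAID-DP parity and hot spares.

    Every raid group costs 2 disks for dual parity, so splitting the
    same shelf into more, smaller groups leaves fewer data disks.
    """
    data = total_disks - spares
    groups = math.ceil(data / raid_group_size)
    return data - 2 * groups  # 2 parity disks per raid group

# Same 48 disks, two layouts:
small = usable_disks(48, raid_group_size=8)   # 6 groups -> 34 data disks
large = usable_disks(48, raid_group_size=23)  # 2 groups -> 42 data disks
print(small, large)
```

The flip side, as noted above, is that a 23-disk raid group takes much longer to rebuild after a failure than an 8-disk one.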

namaste friends
Sep 18, 2004

by Smythe

Nomex posted:

Would you put multiple workloads in 1 aggregate to maximize the number of disks globally? Should I only have 2 aggregates total? If so, that makes my life a lot easier.

Before 64-bit aggregates rolled out, the main limitation was the 16 TB aggregate maximum, which played a major role in your planning. Now that this limit is no longer a problem, your main concern should be whether you think you'll ever need to perform aggregate-level snapshots/restores. I've never seen anyone perform an aggregate-level snaprestore, but I have heard it has saved the skin (and thus careers) of some people. It all comes down to how much money you have for disk.

Performance/spindles aren't so much of a design consideration anymore, now that disks are massive, which is why NetApp now sells FlashCache (aka PAM II) cards.

For example, if you need 50 TB raw and you wanted to use 2 TB SATA drives, you wouldn't get very good performance/spindle compared to 50 TB worth of SAS drives. However if you stuck some FlashCache in front of your SATA array, you'd probably obtain comparable performance, depending on your workload.
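As a back-of-the-envelope illustration of that trade-off (the per-spindle IOPS figures are rough rules of thumb for 7.2k SATA and 15k SAS drives, not NetApp specs):

```python
import math

# Ballpark capacity and random-IOPS figures per spindle (assumed, not vendor data).
SATA_TB, SATA_IOPS = 2.0, 75    # 7.2k RPM SATA
SAS_TB, SAS_IOPS = 0.6, 175     # 15k RPM SAS

need_tb = 50
sata_spindles = math.ceil(need_tb / SATA_TB)  # 25 drives
sas_spindles = math.ceil(need_tb / SAS_TB)    # 84 drives

print(sata_spindles, sata_spindles * SATA_IOPS)  # 25 drives, ~1875 IOPS
print(sas_spindles, sas_spindles * SAS_IOPS)     # 84 drives, ~14700 IOPS
```

Hitting 50 TB on big SATA drives leaves you with a fraction of the spindle IOPS of the SAS layout, which is the gap a FlashCache card in front of the SATA aggregate is meant to paper over for cache-friendly workloads.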

namaste friends
Sep 18, 2004

by Smythe

j3rkstore posted:

Is there any downside to buying off-lease NetApp shelves on ebay? Some of the retailers have warranty options so I'd be looking at those.

I recently got a quote for a shelf which totals more than I paid for the filer, disks, and software. Now the rep is telling me drive prices are going up 10-15% which I think is :psyduck:

It depends on what you plan on storing on them. If it's just your warez :unsmith: collection, go for it! If it's a snapmirror/snapvault secondary for corporate windows workgroup data, I'd say you're probably ok as well. If it's your CRM or corporate email, I'd say it's a bad idea. You'll be protected from low-level disk errors provided you keep enough spares around for the aggregates.

namaste friends
Sep 18, 2004

by Smythe
Question for the folks who like using RDMs. What sort of performance problems have you experienced? Are you still using RDMs or not?

namaste friends
Sep 18, 2004

by Smythe

Misogynist posted:

This question is making me incredibly confused. What are you even getting at?

A couple posts back someone mentioned seeing performance problems with RDMs. My question is pretty much the same as yours except I'm not trying to be a dick about it.

namaste friends
Sep 18, 2004

by Smythe

madsushi posted:

I disagree with this.

Inflating your dedupe ratio by stacking only OS drives into one volume is bad for your overall dedupe amount. You get the BEST dedupe results (total number of GBs saved) by stacking as MUCH data into a single volume as possible. The ideal design would be a single, huge volume with all of your data in it with dedupe on.

Also, re: slowing down by slipping VMFS in the middle, this is wrong, because there is no VMFS on an NFS share. You're better off using iSCSI with SnapDrive to your NetApp LUNs, rather than doing RDM.
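A toy illustration of why pooling data into one volume can only help total dedupe savings (blocks are represented by made-up fingerprint strings; real dedupe works on 4 KB block checksums):

```python
def dedup_savings(blocks):
    """Blocks saved when every identical block is stored only once."""
    return len(blocks) - len(set(blocks))

vol_a = ["os1", "os1", "app", "db"]    # block fingerprints on volume A
vol_b = ["os1", "os2", "app", "logs"]  # block fingerprints on volume B

split = dedup_savings(vol_a) + dedup_savings(vol_b)  # dedupe runs per volume
pooled = dedup_savings(vol_a + vol_b)                # one big volume

# Duplicates that span volumes ("os1", "app") are only caught when pooled.
print(split, pooled)
```

Splitting data across volumes can inflate the *ratio* of one volume, but the total GBs saved is maximized by the single big volume.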

For misogynist, from page 67.

namaste friends fucked around with this message at 20:19 on Mar 19, 2012


namaste friends
Sep 18, 2004

by Smythe
The first thing to check if your SnapManager for <some windows product> is failing is whether one of your volumes has run out of space, or is about to.
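A minimal sketch of that check, assuming you've captured the text output of the filer's `df` command (the sample lines, column layout, and 95% threshold here are illustrative, so adjust the parsing to your ONTAP version's actual output):

```python
def nearly_full_volumes(df_output, threshold=95):
    """Return volumes at or above the given used-capacity percentage.

    Assumes whitespace-separated lines where the 5th column is the
    used percentage, e.g. '/vol/vol0/ 100GB 96GB 4GB 96% /vol/vol0/'.
    """
    full = []
    for line in df_output.splitlines():
        parts = line.split()
        if len(parts) >= 5 and parts[4].endswith("%"):
            if int(parts[4].rstrip("%")) >= threshold:
                full.append(parts[0])
    return full

sample = """/vol/vol0/   100GB   96GB    4GB   96%   /vol/vol0/
/vol/exch/   500GB  250GB  250GB   50%   /vol/exch/"""
print(nearly_full_volumes(sample))  # ['/vol/vol0/']
```

Remember snapshot reserve counts too; a volume can look fine while its snap reserve is what's actually full.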
