Docjowles
Apr 9, 2009

Preface: Not calling out Wicaeed or anything, the topic just triggered a rant :)

What is it about storage that it always ends up being the redheaded stepchild that management is eager to skimp on? My old boss wouldn't bat an eye at paying $shitloads for the latest, fastest Intel Xeons with the most cores (even for tasks that weren't CPU limited at all :derp:). 10Gb NICs when we don't even saturate our 1Gb link? Why not! But god forbid we buy anything but lovely SATA drives and off-brand RAID controllers. Maybe he'd spring for a consumer SSD or two if I was lucky. Is it just that storage is nuanced and hard? The "$50k for 20TB?!?!? I can buy those disks from Best Buy for $500!" syndrome?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Docjowles posted:

Preface: Not calling out Wicaeed or anything, the topic just triggered a rant :)

What is it about storage that it always ends up being the redheaded stepchild that management is eager to skimp on? My old boss wouldn't bat an eye at paying $shitloads for the latest, fastest Intel Xeons with the most cores (even for tasks that weren't CPU limited at all :derp:). 10Gb NICs when we don't even saturate our 1Gb link? Why not! But god forbid we buy anything but lovely SATA drives and off-brand RAID controllers. Maybe he'd spring for a consumer SSD or two if I was lucky. Is it just that storage is nuanced and hard? The "$50k for 20TB?!?!? I can buy those disks from Best Buy for $500!" syndrome?

Big question with many reasons, but it usually boils down to management and technicians sizing an environment based on GB or TB instead of finer points like IOPS and the more technical nitty-gritty. Mostly that's because GB and TB are much easier to grasp, since we hear them all the time. IOPS, seek times, RAID levels, and how they all work together are harder to understand unless you've actually looked into it.

Dilbert As FUCK fucked around with this message at 20:08 on Aug 20, 2013

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Dilbert As FUCK posted:

Big question with many reasons, but it usually boils down to management and technicians sizing an environment based on GB or TB instead of finer points like IOPS and the more technical nitty-gritty. Mostly that's because GB and TB are much easier to grasp, since we hear them all the time. IOPS, seek times, RAID levels, and how they all work together are harder to understand unless you've actually looked into it.
Plus, people can't do basic math. $6,000 × 10 for a 1/4-rack full of servers sounds like way less money than $50k for a basic SAN.
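
To put numbers on the sizing point from the quote above, here's a rough back-of-the-envelope sketch; the per-disk IOPS figures, the way the RAID write penalties are applied, and the 70/30 read/write mix are all illustrative assumptions, not anyone's vendor data.

code:

# Back-of-the-envelope IOPS sizing with RAID write penalties.
# Per-disk IOPS and the workload mix below are illustrative assumptions.
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def frontend_iops(disks, iops_per_disk, read_fraction, raid="RAID5"):
    """Rough host-facing IOPS for a disk group with a given read/write mix."""
    raw = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    # Every host write costs `penalty` back-end I/Os; every read costs one.
    penalty = RAID_WRITE_PENALTY[raid]
    return raw / (read_fraction + write_fraction * penalty)

# 24 x 7.2k NL-SAS (~75 IOPS each) vs. 24 x 15k SAS (~175 IOPS each), 70% reads
for label, per_disk in [("7.2k NL-SAS", 75), ("15k SAS", 175)]:
    print(label, round(frontend_iops(24, per_disk, 0.70, "RAID6")), "front-end IOPS")

Capacity alone says nothing about which of those two disk groups can actually carry a given workload, which is exactly the point being made.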

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


A lot of people really just don't recognize storage as having more than four features: big/small and fast/slow.

There's a whole lot more engineering that goes into a SAN than people recognize, because to them storage is connecting a single drive up to a computer's motherboard. They don't see the redundancy baked in, the advanced caching algorithms, the fault prediction, the dedupe, and the replication.

That said, storage manufacturers can get hosed too because their margins are outrageous and you have to do this lovely dance with vendors that would make a car salesman weep to get anywhere near the real price for one of these things.

I found this article on Anandtech interesting the other day.

http://www.anandtech.com/show/7170/impact-disruptive-technologies-professional-storage-market

Things are changing rapidly in the storage market. Insanely cheap compute power and relatively cheap reliable speedy storage in the form of PCIe SSDs threaten to turn the industry on its head in a few years.

Most of the energy SAN manufacturers have been expending has gone into compensating for all the shortcomings of rotational magnetic disks. Once you start introducing an extensive amount of flash and high-powered compute resources, you can start making fast and reliable storage systems out of commodity parts for far less.

Docjowles
Apr 9, 2009

bull3964 posted:

That said, storage manufacturers can get hosed too because their margins are outrageous and you have to do this lovely dance with vendors that would make a car salesman weep to get anywhere near the real price for one of these things.

So true. The amount of margin you can knock off under the right circumstances is ludicrous. "Ok, here's our quote, $100k for your SAN." "Thanks, but did we mention that we're also taking bids from <competitor> on this project?" "Oh golly, wouldn't you know it, there was a typo on that quote. We meant $10k!"

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

bull3964 posted:

http://www.anandtech.com/show/7170/impact-disruptive-technologies-professional-storage-market

Things are changing rapidly in the storage market. Insanely cheap compute power and relatively cheap reliable speedy storage in the form of PCIe SSDs threaten to turn the industry on its head in a few years.

Most of the energy SAN manufacturers have been expending has gone into compensating for all the shortcomings of rotational magnetic disks. Once you start introducing an extensive amount of flash and high-powered compute resources, you can start making fast and reliable storage systems out of commodity parts for far less.

I agree with some of those points and I've made them myself in this thread, but it's important to realize that "SSD makes things fast" doesn't mean that SAN vendors are suddenly irrelevant. Most differentiation now happens on things other than performance. Every vendor has methods to get you the performance you need; the true differentiators are things like reliability and uptime, firmware maturity and stability, management tools, and baked-in features like compression, dedupe, replication, non-disruptive data movement, etc. And SSD will come with its own challenges that need to be worked around when it comes to reliability and sizing. Random workloads get easier, but sequential access is not orders of magnitude faster, so you can still undersize things with SSD. And, of course, once you have the benefits of SSD you need to be a lot more judicious in how you build the front end and the interconnects, because you can easily overload them once the disk back end is capable of pushing substantially more IO.

There will still be plenty of engineering problems to be solved; they will just move away from masking the problems with slow disk access and towards things that users are more likely to really notice. Besides, plenty of vendors already use commodity hardware and prices are still what they are. Most vendors are sourcing chips from Intel or AMD and drives from Seagate and Hitachi. They are using fairly standard motherboards. They are buying the same SSDs that you can get off the shelf now. The inputs are not what account for the majority of the cost of a storage array; it's the massive number of programmers, QA, support, professional services, and sales people that need to be employed to turn that hardware into a reliable, manageable, productive storage system. Cheap SSD will definitely help bring costs down for some use cases, because you won't need to sell a ton more capacity than required just to meet the IO requirements of the application, but the biggest benefit won't be reduced cost; it will be freeing up developer time that would have been spent tuning performance and spending it on building better management or more features.
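
As a crude illustration of the front-end point, here's a quick sanity check; the SSD count, per-device throughput, and per-port figures below are invented round numbers, not measurements.

code:

# Can the host-facing ports keep up with an all-flash back end?
# Every figure here is an assumed round number for illustration.
ssd_count = 24
mb_per_ssd = 400                      # sustained MB/s per SSD (assumed)
backend_mb = ssd_count * mb_per_ssd   # what the shelves could push

port_mb = {"8Gb FC port": 800, "10GbE iSCSI port": 1100}  # rough usable MB/s

for port, mb in port_mb.items():
    ports_needed = -(-backend_mb // mb)   # ceiling division
    print(f"{port}: back end ~{backend_mb} MB/s needs at least {ports_needed} ports")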

That's my take on it, anyway.

evil_bunnY
Apr 2, 2003

Docjowles posted:

Preface: Not calling out Wicaeed or anything, the topic just triggered a rant :)

What is it about storage that it always ends up being the redheaded stepchild that management is eager to skimp on?
"I can buy cheap disks at $STORE, why do you want $50k for this?"

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Green drives, they save the environment! Also, to maximize storage we are going 24-disk RAID 0.

Thanks Ants
May 21, 2004

#essereFerrari


My neighbor's son is good with computers and that's what he said to do

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Caged posted:

My neighbor's son is good with computers and that's what he said to do

I wish I didn't hear this in the storage world, but it never ceases to amaze me: "Oh well, Billy Bob Joe's kid went to Phoenix Online and said X was good enough," so we know it probably isn't X.

And then you come to find out all they have for storage is 7.2K 1TB SATA drives. Why..

Dilbert As FUCK fucked around with this message at 21:53 on Aug 20, 2013

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


NippleFloss posted:


Besides, plenty of vendors already use commodity hardware and prices are still what they are.

I think the takeaway from that article is that they can get away with it now because people inherently trust storage vendors to keep their data safe, since those vendors have overcome the shortcomings of the mechanical disk in the past. In 3-7 years, that may not be the case anymore.

Basically, we won't need the big engineering input because the increased reliability of the storage will let any vendor churn out high (enough) performance solutions to meet the demands of most customers.

At the same time, the writing is on the wall for traditional shared storage in quite a few cases. The application technologies are starting to come around to the "shared nothing" way of doing things. Exchange is doing it now with DAGs. Hyper-V 2012 is doing it with shared-nothing live migration. Even VMware is dipping their toe in the water in the form of the vSphere Storage Appliance. Yeah, none of these implementations are perfect yet, but the products keep iterating.

The only issue in the past was being able to get enough performance out of local storage for this to be feasible (while at the same time being reliable enough), but PCIe SSDs look like they are going to bring us a long way there.

There will always be an application for SANs, but things are very much in a state of transition right now. It will be interesting to see how things play out.

If nothing else, I will enjoy seeing SAN vendors get knocked down a few pegs and be forced to price their hardware like anyone else. gently caress "call for pricing."

bull3964 fucked around with this message at 22:28 on Aug 20, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

bull3964 posted:

things are very much in a state of transition right now

This is pretty much IT.txt all the time. There's always new poo poo in the pipeline and it's loving cool.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


skipdogg posted:

This is pretty much IT.txt all the time. There's always new poo poo in the pipeline and it's loving cool.

Oh yeah, absolutely. Every once in a while I sit back and think that the industry I'm in didn't even exist at all when I was in college 12 years ago.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

bull3964 posted:

I think the takeaway from that article is that they can get away with it now because people inherently trust storage vendors to keep their data safe, since those vendors have overcome the shortcomings of the mechanical disk in the past. In 3-7 years, that may not be the case anymore.

Basically, we won't need the big engineering input because the increased reliability of the storage will let any vendor churn out high (enough) performance solutions to meet the demands of most customers.

At the same time, the writing is on the wall for traditional shared storage in quite a few cases. The application technologies are starting to come around to the "shared nothing" way of doing things. Exchange is doing it now with DAGs. Hyper-V 2012 is doing it with shared-nothing live migration. Even VMware is dipping their toe in the water in the form of the vSphere Storage Appliance. Yeah, none of these implementations are perfect yet, but the products keep iterating.

The only issue in the past was being able to get enough performance out of local storage for this to be feasible (while at the same time being reliable enough), but PCIe SSDs look like they are going to bring us a long way there.

There will always be an application for SANs, but things are very much in a state of transition right now. It will be interesting to see how things play out.

If nothing else, I will enjoy seeing SAN vendors get knocked down a few pegs and be forced to price their hardware like anyone else. gently caress "call for pricing."

Going from HDD to SSD won't magically make arrays more reliable. You will still need to engineer around issues of disk loss, silent data corruption, catastrophic failure, etc. And the performance improvements will be pretty great, but you will still need to size things properly and design arrays that take advantage of the specific benefits of SSD. As arrays get faster, applications will be written to take advantage of those new speeds, and you will again begin to hit disk bottlenecks. It's such a young technology that there is no telling what the actual challenges will be 10 years down the road, but the idea that any random person will be able to build an enterprise-class array just because of SSD is a pipe dream.

The problem with local storage wasn't really performance; you could easily do whatever you needed with JBOD for cheaper than you could buy a SAN. The problem was that you ended up wasting a ton of space because you had little silos of disk attached to hosts that were inaccessible to other hosts. It's the same issue virtualization is solving on the compute side. Shared storage will certainly begin to look different with things like software-defined storage, or scale-out compute/storage solutions, or non-shared storage clustering... but you'd better believe that the large SAN vendors are going to get into those markets in a hurry.

And, again, much of the benefit of shared storage isn't necessarily performance or clustering; it's features like space-efficient instant snapshots, thin replication, clones, fast restores, non-disruptive movement and tiering, and the ability to fully utilize capacity and performance by not having silos of space.

Our Exchange team here has been lobbying for DAS storage ever since they got on 2010, because Microsoft told them it was a good idea to do away with shared storage and backup entirely. But by the time they purchase blades with enough built-in storage to handle their requirements for 6 DB copies (2 active copies and 1 lag copy at each site), they have spent as much as shared storage would have cost, and they are using way more rack space, generating more heat, using more power, and still have a less reliable and less flexible system. It'll be good for smaller businesses that can't afford a SAN but still want to build a redundant architecture, but I consider the features complementary to shared storage, not a replacement for it, despite what Microsoft wants people to think.
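
The capacity math behind that complaint is easy to sketch; the database size below is a made-up figure, and the copy counts just follow the layout described above.

code:

# Rough raw-capacity comparison: Exchange DAG copies on DAS vs. shared storage.
# db_size_tb is an assumed figure; copy counts follow the layout in the post.
db_size_tb = 4                  # total mailbox database size (assumed)
copies_per_site = 3             # 2 active copies + 1 lag copy per site
sites = 2

dag_on_das_tb = db_size_tb * copies_per_site * sites
san_replicated_tb = db_size_tb * 2          # one copy per site on shared storage

print(f"DAG on DAS: ~{dag_on_das_tb} TB raw, before local RAID overhead")
print(f"Shared storage, replicated: ~{san_replicated_tb} TB raw, before snapshots")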

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Density isn't really going to be a huge issue with flash long term, though. Today, you can buy 10TB of flash that fits on a double-height PCIe card. Sure, it's expensive as hell, but this is young technology.

Reliability is a concern, yes. However, we can't frame this around the traditional reliability concerns that we had for mechanical disks either. It doesn't make sense to use traditional RAID levels or interfaces for devices that share nothing in common with their predecessors.

Yes, applications will eventually be written to take advantage of the speed of PCIe SSDs, and we will have to find ways around new bottlenecks. This is absolutely the case, and even more of a reason why the traditional SAN isn't necessarily the proper fit for these technologies. Why would we want to hamper a PCIe SSD with the latency of iSCSI?

I'm not saying any old person can slap together a white-box server and have enterprise-class storage. I AM saying that new vendors with commodity-priced products, combined with new approaches to data storage by application developers, are going to put a squeeze on the traditional SAN for many implementations.

I mean, if you merged something like a FusionIO drive into a Dell VRTX chassis, you've basically killed most of the low end SAN market overnight. At that point you could afford to have two of the things and replicate between them for added redundancy.

bull3964 fucked around with this message at 23:51 on Aug 20, 2013

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
You guys talking about dedupe being bad for IOPS sizing are giving bad advice. On some storage systems, deduped data will be BETTER for IOPS, as you can fit more data into cache. Imagine a fleet of VDI images that need to be static images. The base image will probably be based on a static snapshot, but then months or years of Windows patches are applied.

1) Each VM has to be stored and read without dedupe. Each block can be cached independently. You cache KB922054 370 times on your array.
2) you dedupe your VMs. Each deduped block only needs to be cached once. You cache KB922054 once for 370 VMs.

This is quite simplified, and does not apply to every vendor, but dedupe can INCREASE performance in some implementations.
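
Here's a toy model of that caching effect; the VM count matches the example above, but the block counts, shared fraction, and cache size are invented for illustration.

code:

# Toy model of why dedupe can improve cache hit rates for VDI clones.
# Block counts, shared fraction, and cache size are invented numbers.
vms = 370
blocks_per_vm = 10_000          # logical blocks per VM image (assumed)
shared_fraction = 0.9           # blocks identical across VMs: OS + patches (assumed)
cache_blocks = 500_000          # blocks the controller cache can hold (assumed)

shared = int(blocks_per_vm * shared_fraction)
unique_per_vm = blocks_per_vm - shared

# Without dedupe, every VM's copy of the shared data is a separate cache entry.
working_set_no_dedupe = vms * blocks_per_vm
# With dedupe, the shared blocks are cached once for the whole fleet.
working_set_dedupe = shared + vms * unique_per_vm

for label, ws in [("no dedupe", working_set_no_dedupe), ("dedupe", working_set_dedupe)]:
    print(f"{label}: working set {ws:,} blocks, fits in cache: {ws <= cache_blocks}")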

bull3964 posted:

Reliability is a concern, yes. However, we can't frame this around the traditional reliability concerns that we had for mechanical disks either. It doesn't make sense to use traditional RAID levels or interfaces for devices that share nothing in common with their predecessors.
This isn't an issue for some apps. Exchange is a great example. If I could buy 1TB of cheap enterprise flash, I would probably just replicate my datastore to a few other flash enabled servers, and one or two disk controllers.

adorai fucked around with this message at 23:54 on Aug 20, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

I'm thinking about turning dedupe on for a shared folder on my VNXe. The engineers are dumping builds there nightly, and once they move everything to the SAN there will be about 800GB of builds. Am I crazy for thinking we could probably see 60+% savings with dedupe turned on? I doubt the nightly builds of our software change that much at a block level. I could be wrong though.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

adorai posted:

You guys talking about dedupe being bad for IOPS sizing are giving bad advice. On some storage systems, deduped data will be BETTER for IOPS, as you can fit more data into cache. Imagine a fleet of VDI images that need to be static images. The base image will probably be based on a static snapshot, but then months or years of Windows patches are applied.

1) Each VM has to be stored and read without dedupe. Each block can be cached independently. You cache KB922054 370 times on your array.
2) you dedupe your VMs. Each deduped block only needs to be cached once. You cache KB922054 once for 370 VMs.

This is quite simplified, and does not apply to every vendor, but dedupe can INCREASE performance in some implementations.

Yes, it CAN; however, in this instance, with a $20k budget, I don't think he is going to be able to afford a system that can do just that.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Dilbert As gently caress posted:

Yes, it CAN; however, in this instance, with a $20k budget, I don't think he is going to be able to afford a system that can do just that.
Well, that is a moot point, because he isn't getting 20TB of anything except a ReadyNAS for $20k.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

adorai posted:

Well, that is a moot point, because he isn't getting 20TB of anything except a ReadyNAS for $20k.

Depends on what KIND of storage. EqualLogic's PS4100 or the MD line can hit and exceed 20TB for <$20k, if he is okay with the performance that NL-SAS and 7.2k drives can output. Now, if we are talking 15k/10k SAS or SSD, that may be pushing it a bit. For all we know he just needs to hold large, mainly stagnant data.


skipdogg posted:

I'm thinking about turning dedupe on for a shared folder on my VNXe. The engineers are dumping builds there nightly, and once they move everything to the SAN there will be about 800GB of builds. Am I crazy for thinking we could probably see 60+% savings with dedupe turned on? I doubt the nightly builds of our software change that much at a block level. I could be wrong though.


No, if the builds are largely the same minus patch code, you could easily see 60%.

Dilbert As FUCK fucked around with this message at 00:11 on Aug 21, 2013

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

bull3964 posted:

Density isn't really going to be a huge issue with flash long term, though. Today, you can buy 10TB of flash that fits on a double-height PCIe card. Sure, it's expensive as hell, but this is young technology.

There are some technical limitations with the way flash is manufactured that make it hard to predict how cheap/small/dense it will realistically get. But my point was simply that shared storage is good because it is much easier to prevent large amounts of resources sitting out there unused in a shared storage infrastructure than it is when you're using DAS, regardless of whether it's HDD or SSD that you're ultimately storing the data on.

quote:

Reliability is a concern, yes. However, we can't frame this around the traditional reliability concerns that we had for mechanical disks either. It doesn't make sense to use traditional RAID levels or interfaces for devices that share nothing in common with their predecessors.

I'm not sure what you mean here. Storage will still need redundancy built in and that redundancy will come in some form of erasure code algorithm, whether that's something like a traditional raid level, or a network based m+n parity scheme or something else. You can still lose a disk, or a block, or have silent data corruption on an SSD. And even then you will likely still want NVRAM or something sitting in front of the SSD and acting as a write journal. SSD is very fast but NVRAM is faster still and for things like high performance databases the difference can be noticeable.
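
For anyone who hasn't dug into how parity-style redundancy actually works, here's a minimal single-parity (RAID-5-ish) sketch; real arrays layer far more elaborate erasure codes and checksums on top of this.

code:

# Minimal single-parity example: XOR parity can rebuild any one lost chunk.
# Real arrays use far more elaborate erasure codes and add checksums on top.
from functools import reduce

def xor_parity(chunks):
    """XOR the chunks together column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

data = [b"AAAA", b"BBBB", b"CCCC"]      # three data chunks in one stripe
parity = xor_parity(data)

# Lose chunk 1, then rebuild it from the survivors plus the parity chunk.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt chunk:", rebuilt)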

quote:

Yes, applications will eventually be written to take advantage of the speed of PCIe SSDs, and we will have to find ways around new bottlenecks. This is absolutely the case, and even more of a reason why the traditional SAN isn't necessarily the proper fit for these technologies. Why would we want to hamper a PCIe SSD with the latency of iSCSI?

Layer 2 latency on good switches is in the nanosecond range versus the microsecond range on PCIe SSD. Transport-layer latency is negligible even on iSCSI, to say nothing of FCoE and the DCB technologies that come with it.
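
Putting rough orders of magnitude on that comparison (all of these figures are ballpark assumptions, not measurements):

code:

# Ballpark latency budget: switch hops are tiny next to device latency.
# All figures are assumed orders of magnitude, not measurements.
switch_hop_ns = 700            # cut-through L2 switch hop (assumed)
pcie_ssd_us = 80               # PCIe flash device latency (assumed)
hdd_us = 5000                  # 15k disk seek + rotation (assumed)

network_us = 2 * switch_hop_ns / 1000      # two hops, converted to microseconds
print(f"network adds ~{network_us:.1f} us on top of {pcie_ssd_us} us (PCIe flash)")
print(f"versus ~{hdd_us} us for a spinning disk access")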

quote:

I'm not saying any old person can slap together a white-box server and have enterprise-class storage. I AM saying that new vendors with commodity-priced products, combined with new approaches to data storage by application developers, are going to put a squeeze on the traditional SAN for many implementations.

I mean, if you merged something like a FusionIO drive into a Dell VRTX chassis, you've basically killed most of the low end SAN market overnight. At that point you could afford to have two of the things and replicate between them for added redundancy.

Nutanix already does this with scale-out compute/storage nodes for virtualization. It's a cool product, but it has not yet killed off the low-end SAN market.

Dilbert As FUCK posted:

No, if the builds are largely the same minus patch code, you could easily see 60%.

Doesn't the VNXe only do file level dedupe? In which case, unless the files are exactly the same, the savings will be 0.

YOLOsubmarine fucked around with this message at 04:39 on Aug 21, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Dammit, you're right. The VNXe does file-level dedupe and compression. I have plenty of space, so I'm not even going to worry about it.

evil_bunnY
Apr 2, 2003

File level dedupe is only slightly retarded, except maybe for a volume where you put all your root disks.

We're saving a good chunk of space on our user volumes using block dedupe, but the amount of actually identical files is, like, 2%.

evil_bunnY fucked around with this message at 09:06 on Aug 21, 2013

Amandyke
Nov 27, 2004

A wha?

skipdogg posted:

Dammit, you're right. The VNXe does file-level dedupe and compression. I have plenty of space, so I'm not even going to worry about it.

Not 100% accurate, here's a whitepaper that describes the dedupe and compression on the VNXe.

http://www.emc.com/collateral/hardware/white-papers/h10579-vnxe-deduplication.pdf

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Amandyke posted:

Not 100% accurate, here's a whitepaper that describes the dedupe and compression on the VNXe.

http://www.emc.com/collateral/hardware/white-papers/h10579-vnxe-deduplication.pdf

Yeah, I read that last night, but I don't think it does pure block-level dedupe. I need to learn more about dedupe in general.

So, for the builds folder I was talking about earlier: every night when buildbot kicks off a job, it builds a bunch of different builds of our software. Each one is slightly different depending on which customer might get the software. The core software is the same; the only difference between the build for company A and company C might be a few lines of text, a logo for the UI, and maybe a driver or two. Then they create factory-signed images and unsigned images for dev/test. So basically the final product is a 25MB highly compressed .img file, and there are about 30 different versions of it. The file produced is not compressible at all; these software images run in an embedded Linux environment and they save every KB of space possible.

Is there a software program out there that can analyze two files and see how well dedupe would work on them?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Amandyke posted:

Not 100% accurate, here's a whitepaper that describes the dedupe and compression on the VNXe.

http://www.emc.com/collateral/hardware/white-papers/h10579-vnxe-deduplication.pdf

That whitepaper states that deduplication occurs on the file level, meaning only identical files will be deduplicated. Compression occurs within a file as well (as usual) and does not compress data "between" files. So "File level deduplication and compression" seems pretty accurate. And in his case it doesn't seem very useful at all.

skipdogg posted:

Is there a software program out there that can analyze two files and see how well dedupe would work on them?

BeyondCompare will scan a directory for identical files. But really, you probably don't need to go that far to know that it won't work. The dedupe job will compute a hash of the file based on its blocks on disk. If the hash for two files is different, it will consider them different files and they won't be candidates for deduplication. Any change to a bit anywhere within the file will cause the file hashes to diverge, so unless someone is literally just copying the same file, without modification, to the file store, you won't see any savings. Block- or page-level dedupe would probably give you much better results if it were available, based on what you've said about the process.
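
If you just want a quick answer before reaching for BeyondCompare, a generic duplicate-file scan does the same job; this is a sketch with a placeholder path, not anything VNXe-specific.

code:

# Find byte-identical files by hashing them, which is roughly the test a
# file-level dedupe pass cares about. The path below is a placeholder.
import hashlib
import os
from collections import defaultdict

def file_digest(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def duplicate_report(root):
    by_hash = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            by_hash[file_digest(full)].append(full)
    dupes = [paths for paths in by_hash.values() if len(paths) > 1]
    saved = sum((len(p) - 1) * os.path.getsize(p[0]) for p in dupes)
    print(f"{len(dupes)} duplicate groups, ~{saved / 2**20:.1f} MiB reclaimable")

duplicate_report("/path/to/builds")   # placeholder path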

Pile Of Garbage
May 28, 2007



Agrikk posted:

QLogic QLE2460 4Gb Fibre Channel HBA. They have Server 2012 support and are cheap as hell on eBay. Will these guys work with a SilkWorm switch and behave properly in a 2012/ESXi cluster?

A bit of a late reply (Apologies) but yeah, those should be fine as long as you've loaded the latest firmware and are running the latest drivers (Also with the correct MPIO settings if you are using it).

I'm not fully across WSFC storage requirements in Server 2012, but I remember that in Server 2008 R2 you had to ensure the SAN supports SCSI-3 PR (Persistent Reservations) in order to properly present LUNs to the cluster nodes.

Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug
We currently have 2 massive MDS 9513s with only 2x 48-port blades and a SANTap blade in each chassis. They are fine, but the massive chassis are way overkill for our environment.

Since the SANTap is going away, we would like to look at getting these things out of our racks and replacing them with a couple of 48-port Brocades. All we are doing is basic 8Gb FC zoning between SANs and our C7000s.

I know very little about Brocades, and was wondering if anybody could point me in the direction of what models might be an appropriate substitute. It looks like the 6510 is their most popular 48-port 1U switch, but I want to make sure we get into something bulletproof, and we are fine with spending money.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

sanchez posted:

Remember that there is a difference between NetApp-style dedupe and compression like Nimble's too; the latter might be a better fit.

Running a few Nimble CS-240 units. Our VDI environment at one site is about 150 thick-provisioned persistent VMs (don't ask me why, I have a fuckhead for a coworker). The datastore they are on within vSphere is 5TB with a little over 1TB free. Looking at the volume from the Nimble, it is showing a 5TB volume with just under 2TB used.

I still need to figure out how Nimble does their "compression math" because it is only reporting 1.45x compression.
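
For what it's worth, the generic ratio math is just logical data written divided by physical space consumed; the figures below are assumptions picked to land on a 1.45x ratio, not Nimble's actual accounting.

code:

# Generic compression-ratio arithmetic: logical data written vs. physical
# space consumed. Figures are assumed, chosen to land on a 1.45x ratio;
# this is not necessarily how Nimble computes its reported number.
logical_gb = 2900      # data the hosts actually wrote (assumed)
physical_gb = 2000     # space consumed on the array (assumed)

ratio = logical_gb / physical_gb
savings_pct = (1 - physical_gb / logical_gb) * 100
print(f"compression ratio {ratio:.2f}x, space savings {savings_pct:.0f}%")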

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

cheese-cube posted:

A bit of a late reply (Apologies) but yeah, those should be fine as long as you've loaded the latest firmware and are running the latest drivers (Also with the correct MPIO settings if you are using it).

I'm not fully across WSFC storage requirements in Server 2012, but I remember that in Server 2008 R2 you had to ensure the SAN supports SCSI-3 PR (Persistent Reservations) in order to properly present LUNs to the cluster nodes.

Persistent Reservation! That's what failed on my tests. My old SAN (MSA1000) failed the PR check under 2008 R2. After that I used it as a DAS for a little while and eventually tossed it when I shrank my cabinet footprint. (Two 1500W power supplies driving 28 10k SCSI-2 drives in a 1.7TB volume just seemed a retarded thing to do.)

My plan is to use Server 2012 R2 as my FC storage target, so it looks like these cards are supported on both sides of the link. If I get that working, I'll add in a SilkWorm switch to experiment with cluster storage groups and failover clustering under 2012.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Moey posted:

Running a few Nimble CS-240 units. Our VDI environment at one site is about 150 thick-provisioned persistent VMs (don't ask me why, I have a fuckhead for a coworker). The datastore they are on within vSphere is 5TB with a little over 1TB free. Looking at the volume from the Nimble, it is showing a 5TB volume with just under 2TB used.

I still need to figure out how Nimble does their "compression math" because it is only reporting 1.45x compression.

If you're using lazy-zeroed thick provisioning, this is what you would expect. When you create a lazy-zeroed VMDK, VMFS pre-allocates the blocks to be used in the block allocation map, but it doesn't actually write to them. The Nimble system doesn't have access to the VMFS block allocation map, so it only knows that a block is "in use" when it has been written to. So even though you're thick provisioning on the ESX side, you're really still thin provisioned on the storage side, because only blocks that have been written will be reflected in the used-space calculations on the Nimble array.

Amandyke
Nov 27, 2004

A wha?

Linux Nazi posted:

We currently have 2 massive MDS 9513s with only 2x 48-port blades and a SANTap blade in each chassis. They are fine, but the massive chassis are way overkill for our environment.

Since the SANTap is going away, we would like to look at getting these things out of our racks and replacing them with a couple of 48-port Brocades. All we are doing is basic 8Gb FC zoning between SANs and our C7000s.

I know very little about Brocades, and was wondering if anybody could point me in the direction of what models might be an appropriate substitute. It looks like the 6510 is their most popular 48-port 1U switch, but I want to make sure we get into something bulletproof, and we are fine with spending money.

Why are you moving away from Cisco and to Brocade? Is the rest of the environment Brocade? Do you have other fibre switches?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
You could get a 48-port Nexus 5K UP, and then you'd have a migration path to FCoE and won't have to deal with learning a new switch platform or fighting potential interop issues.

Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug

Amandyke posted:

Why are you moving away from Cisco and to Brocade? Is the rest of the environment Brocade? Do you have other fibre switches?

No other switches. We basically have (in each of our two datacenters) a fully outfitted VMAX 20K, RecoverPoint, and 6 C7000 blade chassis (though we've consolidated down to 3).

The MDS switches occupy nearly 2 entire racks. They're basically required for our SANTap (for RecoverPoint replication), but as we are moving to VPLEX we basically just need to zone between our arrays and C7000s, and that is it. There's nothing wrong with the MDSs; they are just overkill and consume a lot of unnecessary power and cooling. It would be nice to move to a couple of 1U fibre switches.

1000101 posted:

You could get a 48-port Nexus 5K UP, and then you'd have a migration path to FCoE and won't have to deal with learning a new switch platform or fighting potential interop issues.

That actually isn't a terrible idea. We have 2x 48-port Nexus 5Ks feeding all of our 10Gb needs now, and at this point everything is feeding off of HP FlexFabrics. I'll look into what this would take; I've never really looked at FCoE before.

Blame Pyrrhus fucked around with this message at 16:45 on Aug 23, 2013

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
Also note that the 5K can do native FC for the VMAX as well. We're using a VMAX 40K with a 5596UP and are pretty happy with the results thus far.

AlternateAccount
Apr 25, 2005
FYGM
Would this be the right thread to ask about a backup situation for offsite that doesn't necessarily involve a NAS or SAN?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
We won't bite; what did you have in mind?

AlternateAccount
Apr 25, 2005
FYGM
Eh, I just have the probably common problem of needing offsite backup for an old 2950, but there's no hardware for that purpose in place and their link is too slow to push the data up to an outside destination. Tried copying backup files across to a simple portable hard drive, but given that their little NAS box for local backups is ALSO on USB, it looks like you only get about half speed across, something in the neighborhood of 15MB/s. This leads to copying times of >24 hours.
I'd like to get a tape drive, but anything big enough to store a full backup of their data is >$3000 it looks like.

As far as I can tell, there's no real option between sticking a USB3 card in the server to get the transfer done in a reasonable amount of time and investing thousands in an expensive tape system. Is that pretty much where I'm at?

AlternateAccount fucked around with this message at 19:23 on Aug 27, 2013

Thanks Ants
May 21, 2004

#essereFerrari


How is the NAS also on USB? That part doesn't make a ton of sense to me.

How much data are you looking to back up? How much changes and how often? How long do you need to keep it for?

Tandberg Data do some SME backup stuff that's pretty cheap, have a look.


AlternateAccount
Apr 25, 2005
FYGM

Caged posted:

How is the NAS also on USB? That part doesn't make a ton of sense to me.

How much data are you looking to back up? How much changes and how often? How long do you need to keep it for?

Tandberg Data do some SME backup stuff that's pretty cheap, have a look.

Ergh, I said NAS and didn't mean it; sorry, I've got people calling me with troubles as I type this. It's just a little Buffalo 3TB RAID 1 box. It's where their local backups go. Since it gets the full speed of the USB 2 link, it usually does a full backup in half the time it takes to copy a backup from it to another USB device. I wouldn't call it speedy, but it usually runs full backups over the weekend, which gives it plenty of time.

They've never had a working offsite backup; they were trying to trickle the backups across a 1Mb link to another office, and clearly that's not going to work.
The important people want weekly full backups, but we could probably get away with monthly. Total data is ~2TB right now. Looking at a pile of incremental files from this month, they look to range from 2-8GB in size daily.

The Tandberg stuff looks somewhat cheaper (in a five-minute look), but right now they're in a bit of a lockdown expense-wise; there's no way I am going to be able to pull any hardware that costs four figures, as silly as that seems. However, I really only need whatever we end up doing to work a bit past the end of the year. I will be able to budget in some proper hardware for NEXT year and get some things working like they should, but for right now the cash flow is a trickle at best. That's why a $50 USB3 card looks appealing; it just feels kind of sloppy to be shuffling 3.5" hard drives back and forth for this purpose.
