Vanilla
Feb 24, 2002

Hay guys what's going on in th

Mr Shiny Pants posted:

Sure it fails, everything fails. And I don't know about you, but if the SAN goes down we are looking at a day or two of downtime. So no, we have two storage arrays in an active-active configuration in different datacentres. This is for DR, but also in case the first one fails. Getting a tech onsite to fix our array takes a couple of hours; checking that everything works and booting the whole infrastructure takes a couple more. And that's if they can find the issue right away. Our IBM SAN went down because a second disk decided not to fill in as a spare even though the array said it was a good drive. That was a fun night. Took us two days to get it running again, even with IBM support.

Now you can say the support was worth it, and it was, but let's not pretend storage arrays don't go down. They do, and usually spectacularly.

As for management backing: if you buy a storage array, the cost usually requires some management backing, otherwise you don't get the funding. So during those talks, building it yourself can be discussed (depends on company culture for sure), as well as the risks involved. If both parties feel it is worth it due to cost, flexibility or whatever, I don't see a reason why you wouldn't at least look at some solutions. IMHO.


I don't think he meant the VMAX was incapable of failure, everything is... but the VMAX thing is a whole different conversation.

When it comes to management backing, sometimes a company simply has to experience the cost of an outage before it will start buying serious gear. Not just the loss of revenue, but also reputation, brand, etc.

Everyone can work through a performance issue, but outages are the one thing that gets people sacked at the top of the chain.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

I had a 3240 controller fail, and then the second one in the HA pair fail a few days later, before the maintenance window to replace the failed part in the first one. It was a bad week.

Much though it pains me to say, a 3240 is not in the same class of reliability as a VMAX or an HDS VSP.

Though with 4-hour parts replacement you should only be running degraded for a day at most, which is what our uptime numbers are based on.
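
As a back-of-envelope sketch of that exposure (the MTBF figure below is an assumption for illustration, not a vendor spec), the chance that the surviving controller also dies grows with how long you sit degraded:

code:
import math

MTBF_HOURS = 250_000           # assumed controller MTBF, illustration only
FAILURE_RATE = 1 / MTBF_HOURS  # per-hour failure rate (simple exponential model)

def p_partner_fails(degraded_hours: float) -> float:
    """Probability the surviving controller also fails within the window."""
    return 1 - math.exp(-FAILURE_RATE * degraded_hours)

for window in (4, 24, 72):     # 4-hour parts SLA vs. a day vs. a deferred maintenance window
    print(f"{window:>2} h degraded: {p_partner_fails(window):.5%}")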

OldPueblo
May 2, 2007

Likes to argue. Wins arguments with ignorant people. Not usually against educated people, just ignorant posters. Bing it.
I think my record for hot-swapping a 3240 motherboard is an hour, not counting delivery times. I'm not saying it should always be that fast, but how long it takes usually depends on external circumstances. For example, I'm going to wildly speculate that maybe they didn't want to do a giveback right away but wanted to wait for a set maintenance window to cut back? If so, then they left themselves non-redundant intentionally (once again, assuming). Not judging or anything, and you don't have to go into it, adorai, if you don't want to; just wanting to point out that the actual act of replacement itself is generally pretty quick.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

Much though it pains me to say, a 3240 is not in the same class of reliability as a VMAX or an HDS VSP.

Though with 4-hour parts replacement you should only be running degraded for a day at most, which is what our uptime numbers are based on.
Generally we try to get it replaced the same day. Unfortunately, there was a logistics issue which delayed the part getting to us after the failure on Sunday. We got the part Monday afternoon, but a second part was shipped on Monday for delivery on Tuesday, so the maintenance was scheduled for Tuesday. We rescheduled for Wednesday because we had a large number of statements being generated Tuesday night and there was no opportunity for any downtime, and the second node failed on Wednesday.

Ataraxia
Jun 15, 2001

Champion of nothing.

NippleFloss posted:

It's incredibly hard to build a system out of consumer parts that truly has no SPOF without a significant investment in money and resources. Something like GPFS will do it, but that's not really suitable for general purpose storage use or even cheap and deep backup storage. If your boss came to you and said "please develop a storage system that can provide X number of these types of IOPs, and which has an uptime of Y 9's, and costs significantly less than the enterprise vendors" could you do it? Could you actually prove that it could meet those requirements? Would you stake your job on it?

I'd like to ask, why isn't GPFS suitable in those scenarios? Is it due to licensing or hardware requirements? Maybe I don't get it but shouldn't GPFS be good at things bigger than a single head NAS system?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Ataraxia posted:

I'd like to ask, why isn't GPFS suitable in those scenarios? Is it due to licensing or hardware requirements? Maybe I don't get it but shouldn't GPFS be good at things bigger than a single head NAS system?

It's good at ingesting and churning through large amounts of data quickly, which makes it a good backup or archive target, but it's complicated, expensive, and hardware heavy, so it's not really a fit for a company looking for cheap backup storage.

It's not great for general purpose storage because most applications that run in datacenters are sensitive to small random IO performance, which isn't the strong suit of distributed solutions due to how metadata heavy they are.
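
As a toy illustration of that metadata tax (all numbers below are invented, not measurements of any product): each small random read pays a metadata lookup plus a gather from several nodes, while sequential IO amortizes that cost through readahead.

code:
METADATA_LOOKUP_MS = 0.5   # assumed cost to locate the pieces of one request
PER_NODE_FETCH_MS = 0.3    # assumed cost to pull one piece from a data node
NODES_PER_STRIPE = 4       # assumed pieces per request spread across nodes

def random_read_overhead_ms(requests: int) -> float:
    # Every small random read pays the full lookup + gather cost.
    return requests * (METADATA_LOOKUP_MS + NODES_PER_STRIPE * PER_NODE_FETCH_MS)

def sequential_read_overhead_ms(requests: int, readahead: int = 64) -> float:
    # Readahead lets one lookup + gather cover a long run of requests.
    batches = -(-requests // readahead)  # ceiling division
    return batches * (METADATA_LOOKUP_MS + NODES_PER_STRIPE * PER_NODE_FETCH_MS)

print("1000 random reads    :", random_read_overhead_ms(1000), "ms of overhead")
print("1000 sequential reads:", sequential_read_overhead_ms(1000), "ms of overhead")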

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun
You can put pretty much any hardware behind GPFS as long as it can see it as a block device. I've even made little clusters in virtualbox serving up those virtual "disks" and it all works (slowly). You can put metadata on SSDs or whatever and spread it out among all server nodes and do enough tuning to make it a lot better for small and random I/O patterns, but yeah, GPFS is mostly used in places with a requirement for large-block sequential bandwidth. It can pretty much max out whatever storage is behind it.

I'm guessing licensing costs are going to be THE most prohibitive factor in using GPFS in a cheap/small environment. The test system for our big cluster is 1 manager node, 2 server nodes, and a single Netapp E5400, which is about as small as you could make a cluster while still having some amount of flexibility for taking down a node and having the other take over everything. You could make a single node "cluster" if you wanted, with all the caveats you could imagine. GPFS is also waaaaaay overkill for a backup target, but would totally work.

Ataraxia
Jun 15, 2001

Champion of nothing.
Thanks for your replies guys.

Full disclosure: I went from SME/desktop support straight to a GPFS house, so while I kind of expected your answers it's nice to hear an impartial outsider's opinion. If you're inclined I would like to hear your opinions on some of the other distributed system vendors in a similar vein (NetApp, EMC, Nimble, Brocade et al.). I kind of feel like if IBM made the licensing a fair whack cheaper and bundled Native RAID (never going to happen :( ) it could be a lot more popular. And yes, the E5400/DCS3700/MD3660 or whatever you want to call it is a nice bit of kit. There's probably not a lot more I can say without giving away who I work for, as the market is apparently so niche.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Ataraxia posted:

Thanks for your replies guys.

Full disclosure: I went from SME/desktop support straight to a GPFS house, so while I kind of expected your answers it's nice to hear an impartial outsider's opinion. If you're inclined I would like to hear your opinions on some of the other distributed system vendors in a similar vein (NetApp, EMC, Nimble, Brocade et al.). I kind of feel like if IBM made the licensing a fair whack cheaper and bundled Native RAID (never going to happen :( ) it could be a lot more popular. And yes, the E5400/DCS3700/MD3660 or whatever you want to call it is a nice bit of kit. There's probably not a lot more I can say without giving away who I work for, as the market is apparently so niche.

EMC has a few different products that compete in the scale-out array space: Isilon, XtremIO and ScaleIO. Of those, only Isilon has enough market presence to make any real determinations about what it's good at, and that seems to be high-throughput sequential IO streams. So massive-scale archival data, object storage, streaming video, etc. It has proven to be less than stellar at running things like VMware or OLTP because (like many distributed systems) it is very metadata intensive, and the time required to a) query metadata to locate all of the pieces of an IO request and b) assemble those pieces from the various nodes they are located on incurs enough latency to make it inefficient for random IO, where you can't do readahead to mask that latency. That sort of problem is solvable through things like a coherent global cache (like VMAX), but adding the hardware to work around it makes things significantly more expensive.

NetApp doesn't really do scale out like that. Clustered ONTAP can scale a namespace, but individual filesystems live on only a single controller, so single workload performance is limited by the controller that owns the volume unless you do striping at the host layer. They have a construct called an infinite volume that stripes IO from a single volume across multiple nodes, but it is meant for pretty limited use cases right now.

I'm not sure what the performance of Nimble's solution is like because they are also still pretty small. One interesting thing they do is provide a special multi-path driver that not only manages paths on the host, but also directs IO requests for specific LBAs to the node that owns that LBA, so there is no back-end traffic to retrieve the data from a partner node and no need for a global cache. I'm not sure how they do that, though (a round-robin assignment of LBAs or blocks of LBAs to each node, perhaps?), and it could cause other issues.

Basically, there is no perfect solution. All of these arrays are built to be good at one or several things, and the design decisions required to meet those goals involve trade-offs that make them less good at other things. Which is why it's important to pick a vendor based on what you actually want to do, and not based on synthetic benchmarks or innovative features that won't help with your workload.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

NippleFloss posted:

EMC has a few different products that compete in the scale-out array space: Isilon, XtremIO and ScaleIO. Of those, only Isilon has enough market presence to make any real determinations about what it's good at, and that seems to be high-throughput sequential IO streams.

Slight correction, but XtremIO is not really designed for scaling out like ScaleIO or Isilon, imo. In fact, expansion is currently disruptive. It's also seeing pretty good adoption in the VDI space.

Wicaeed
Feb 8, 2005

NippleFloss posted:

EMC has a few different products that compete in the scale-out array space: Isilon, XtremIO and ScaleIO. Of those, only Isilon has enough market presence to make any real determinations about what it's good at, and that seems to be high-throughput sequential IO streams. So massive-scale archival data, object storage, streaming video, etc. It has proven to be less than stellar at running things like VMware or OLTP because (like many distributed systems) it is very metadata intensive, and the time required to a) query metadata to locate all of the pieces of an IO request and b) assemble those pieces from the various nodes they are located on incurs enough latency to make it inefficient for random IO, where you can't do readahead to mask that latency. That sort of problem is solvable through things like a coherent global cache (like VMAX), but adding the hardware to work around it makes things significantly more expensive.

NetApp doesn't really do scale out like that. Clustered ONTAP can scale a namespace, but individual filesystems live on only a single controller, so single workload performance is limited by the controller that owns the volume unless you do striping at the host layer. They have a construct called an infinite volume that stripes IO from a single volume across multiple nodes, but it is meant for pretty limited use cases right now.

I'm not sure what the performance of Nimble's solution is like because they are also still pretty small. One interesting thing they do is provide a special multi-path driver that not only manages paths on the host, but also directs IO requests for specific LBAs to the node that owns that LBA, so there is no back-end traffic to retrieve the data from a partner node and no need for a global cache. I'm not sure how they do that, though (a round-robin assignment of LBAs or blocks of LBAs to each node, perhaps?), and it could cause other issues.

Basically, there is no perfect solution. All of these arrays are built to be good at one or several things, and the design decisions required to meet those goals involve trade-offs that make them less good at other things. Which is why it's important to pick a vendor based on what you actually want to do, and not based on synthetic benchmarks or innovative features that won't help with your workload.

Sounds like you know a bit about ScaleIO :)

Do things like FusionIO cards or the addition of the EMC XtremCache product make any sort of difference in regards to random IO performance?

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
I have an ancient EMC AX4-5i with failed disks. Does anyone know if I can use off-the-shelf drives as replacements? I'm waiting on a quote from Dell for a replacement drive as it's out of warranty.

I'm not worried about longevity of this array, I just need to keep it running until we can finish moving all of the data off it.

:edit: I'm looking at putting Seagate ST3450857SS drives in it; the drives currently in the unit are ST345085 CLAR450.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
Probably not. At least get a same size/speed drive if you're going to try.

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
Mmm, still waiting on a quote from dell but might order some cheetahs.

If I just empty this raid group, I can then just delete the raid group and designate the drives freed up by doing so as hot spares can't I?

ragzilla
Sep 9, 2005
don't ask me, i only work here


NippleFloss posted:

I'm not sure what the performance of Nimble's solution is like because they are also still pretty small. One interesting thing they do is provide a special multi-path driver that not only manages paths on the host, but also directs IO requests for specific LBAs to the node that owns that LBA, so there is no back-end traffic to retrieve the data from a partner node and no need for a global cache. I'm not sure how they do that, though (a round-robin assignment of LBAs or blocks of LBAs to each node, perhaps?), and it could cause other issues.

The Nimble CIM/PSP downloads a map of block-to-node mappings and uses that to direct requests to the correct node. If you're not running the provided CIM/PSP, it does iSCSI redirection on the back end.
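
Roughly the idea, as a sketch rather than Nimble's actual code; the chunked round-robin layout and the names here are assumptions for illustration:

code:
CHUNK_BLOCKS = 4096            # assumed size of each contiguous LBA range
NODES = ["node-a", "node-b"]   # example two-node group

def owner_of(lba: int) -> str:
    """Host-side lookup: which node owns the chunk containing this LBA?"""
    return NODES[(lba // CHUNK_BLOCKS) % len(NODES)]

def submit_io(lba: int, have_host_plugin: bool) -> str:
    owner = owner_of(lba)
    if have_host_plugin:
        return f"sent directly to {owner}"        # no back-end hop needed
    first_hop = NODES[0]                          # no map: hit any node
    if first_hop == owner:
        return f"served by {first_hop}"
    return f"{first_hop} redirects to {owner}"    # extra round trip

print(submit_io(5_000, have_host_plugin=True))    # sent directly to node-b
print(submit_io(5_000, have_host_plugin=False))   # node-a redirects to node-b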

evil_bunnY
Apr 2, 2003

IIRC EqualLogic works pretty much the same way.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

theperminator posted:

Mmm, still waiting on a quote from dell but might order some cheetahs.

If I just empty this raid group, I can then just delete the raid group and designate the drives freed up by doing so as hot spares can't I?

yes

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
Cool beans, I'll whip my lackeys to move everything off post haste!

Thanks for your help, this EMC is an ancient relic and I normally deal with our EQLs.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
I should disclaim that I've never worked with the AX series, but that's how it would work on other Clariions.

Amandyke
Nov 27, 2004

A wha?

theperminator posted:

Mmm, still waiting on a quote from dell but might order some cheetahs.

If I just empty this raid group, I can then just delete the raid group and designate the drives freed up by doing so as hot spares can't I?

I would just remove them from the array entirely. Marking them as a hot spare would get you into all sorts of trouble if another drive failed and it tried to use an already bad drive as a hot spare. Just unseat them.

orange sky
May 7, 2007

So, I've just been told by someone on my team we'll be working on Avamar tomorrow, on a client's premises. I want to be prepared, what do you suggest I watch/download/read? Preferably something free, since I'm a poor bastard. I've watched the proven professionals video on youtube, but it's too short to really learn anything. Something that skims functionalities and how to work with those functionalities would be nice.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

orange sky posted:

So, I've just been told by someone on my team we'll be working on Avamar tomorrow, on a client's premises. I want to be prepared, what do you suggest I watch/download/read? Preferably something free, since I'm a poor bastard. I've watched the proven professionals video on youtube, but it's too short to really learn anything. Something that skims functionalities and how to work with those functionalities would be nice.

If you want to play around with the Avamar appliance, just download the VDP from VMware.

http://www.vmware.com/products/vsphere-data-protection-advanced At its core it's essentially an Avamar virtual appliance.

Pantology
Jan 16, 2006

Dinosaur Gum

orange sky posted:

So, I've just been told by someone on my team we'll be working on Avamar tomorrow, on a client's premises. I want to be prepared, what do you suggest I watch/download/read? Preferably something free, since I'm a poor bastard. I've watched the proven professionals video on youtube, but it's too short to really learn anything. Something that skims functionalities and how to work with those functionalities would be nice.

Are you an EMC partner? If so, give the vLabs a look: https://emc.demoemc.com/portal

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Anyone here have decent experience with Nexenta's storage accelerator?

Can't say I have as much as I'd like; I'm implementing a setup this weekend to speed up a NetApp FAS 2204(?).

Host is 384GB, 2x16 core, with a 4x400GB SSD setup, so I think stuff should zip after this.

Zephirus
May 18, 2004

BRRRR......CHK

Dilbert As gently caress posted:

Anyone here have decent experience with Nexenta's storage accelerator?

Can't say I have as much as I'd like; I'm implementing a setup this weekend to speed up a NetApp FAS 2204(?).

Host is 384GB, 2x16 core, with a 4x400GB SSD setup, so I think stuff should zip after this.

It depends what you're trying to achieve. If you've got lots of heavily utilised small files that your NetApp is struggling to serve, or you have heavy sequential IO requirements (like media transcoding, etc.) that aren't time critical, then they can be a really good way to spice up slower storage.

At some point, however, you are still going to exhaust the capabilities of the underlying storage; as long as the data is still on the NetApp, you're tied to its performance in many ways.

I've used Avere, SANsymphony and IBM's SVC, and they all work pretty much as advertised, with the above caveats.
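
A generic read-through cache sketch (not any of those products' implementations; the latencies are invented) shows the shape of it: hits come off the fast tier, but every miss still pays the backing array's full price, which is why the back end stays your ceiling.

code:
from collections import OrderedDict

class ReadCache:
    """Read-through LRU cache in front of a slower backing array."""

    def __init__(self, capacity: int, backend_ms: float, cache_ms: float):
        self.capacity = capacity
        self.backend_ms = backend_ms
        self.cache_ms = cache_ms
        self._lru: "OrderedDict[int, bytes]" = OrderedDict()

    def read(self, block: int) -> float:
        """Return the latency paid for this read."""
        if block in self._lru:
            self._lru.move_to_end(block)      # hit: refresh LRU position
            return self.cache_ms
        if len(self._lru) >= self.capacity:   # miss: make room, then fill
            self._lru.popitem(last=False)     # evict least recently used
        self._lru[block] = b"..."             # placeholder for the cached data
        return self.backend_ms                # a miss still costs the back end

cache = ReadCache(capacity=1000, backend_ms=8.0, cache_ms=0.3)
hot = [cache.read(b % 500) for b in range(5000)]        # working set fits: mostly hits
cold = [cache.read(b) for b in range(10_000, 15_000)]   # working set doesn't: all misses
print("avg ms, hot working set :", round(sum(hot) / len(hot), 2))
print("avg ms, cold working set:", round(sum(cold) / len(cold), 2))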

stop, or my mom will post
Mar 13, 2005

Dilbert As gently caress posted:

If you want to play around with the Avamar appliance, just download the VDP from VMware.

http://www.vmware.com/products/vsphere-data-protection-advanced At its core it's essentially an Avamar virtual appliance.

VDP-A is based on Avamar, but it's not a good learning ground for getting to know Avamar. The vLab, if orange sky is an EMC partner, is the best bet.

orange sky, if you have a support.emc.com account, log in and download documentation; otherwise, once on-site, open a web browser and enter the hostname of the Avamar system, then at a minimum browse to the documentation and grab the base Administration guide plus any docs relevant to the environment you're working in.

Most of the administration guides are available online in one form or another; google for "avamar administration guide 6" or "avamar administration guide 7.0" -- it's unlikely you'll run into older versions.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Zephirus posted:

It depends what you're trying to achieve. If you've got lots of heavily utilised small files that your NetApp is struggling to serve, or you have heavy sequential IO requirements (like media transcoding, etc.) that aren't time critical, then they can be a really good way to spice up slower storage.

At some point, however, you are still going to exhaust the capabilities of the underlying storage; as long as the data is still on the NetApp, you're tied to its performance in many ways.

I've used Avere, SANsymphony and IBM's SVC, and they all work pretty much as advertised, with the above caveats.

Nope, this is for a lab for VMware and security courses; it's a crutch until we can get a CS240 from Nimble. Because YAY PROCUREMENT PROCESSES.

I realize the bottleneck is going to be whatever the back-end NetApp can take, but a front-end L2ARC is nice.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Presented without comment:

http://www.reddit.com/r/vmware/comments/26zbkb/my_vsan_nightmare/

Simpleboo
Oct 19, 2013

Hey all quick question:

I am setting up a file server for the business I work for, and we are purchasing a Windows Server 2012 R2 Standard edition OS. Will I also need to purchase CALs so users can access it? I've been doing a little research but the terminology is confusing me.

Docjowles
Apr 9, 2009


Cheap-rear end entry level RAID controller leads to poor storage performance, film at 11. I don't really see a VSAN-specific problem here, other than VMware's HCL being a bit "optimistic" ;). My main takeaway is that you should always test common failure modes in a controlled environment to see what will happen (such as how badly performance degrades during a rebuild) before you throw new tech into production.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Docjowles posted:

Cheap-rear end entry level RAID controller leads to poor storage performance, film at 11. I don't really see a VSAN-specific problem here, other than VMware's HCL being a bit "optimistic" ;). My main takeaway is that you should always test common failure modes in a controlled environment to see what will happen (such as how badly performance degrades during a rebuild) before you throw new tech into production.

This is exactly what I was thinking. They pushed their crap hardware too hard.

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!

It's almost as if storage needs to be architected regardless of what solution you use. lovely HBAs on those servers are lovely, gotcha; they're equivalent in performance to M1015s with stock firmware, which is barely adequate for home labs.

Moey posted:

This is exactly what I was thinking. They pushed their crap hardware too hard.

It's honestly the HCL's fault for not taking rebuilds into account for performance. Early adopters will almost always get bitten by details like these; at least they didn't lose data.

deimos fucked around with this message at 21:36 on Jun 4, 2014

AlternateAccount
Apr 25, 2005
FYGM

deimos posted:

It's honestly the HCL's fault for not taking rebuilds into account for performance. Early adopters will almost always get bitten by details like these; at least they didn't lose data.

Agree. Maybe it's out of ignorance, but I would honestly expect ANY hardware I pull off of the HCL to handle any task gracefully and with acceptable speed in a deployment of the very modest scope shown in the example.

Kinda blaming VMWare here.

Aquila
Jan 24, 2003

In a major change of direction, my Hitachi HUS 150 did something just as advertised: an online firmware upgrade. We really didn't notice anything at all at the server or application layer. Really quite slick.

Also, we upgraded the firmware on our Brocade 6510s with them online. Not failing over one path at a time, but totally online. Black magic is all we can figure on that one.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Docjowles posted:

Cheap-rear end entry level RAID controller leads to poor storage performance, film at 11. I don't really see a VSAN-specific problem here, other than VMware's HCL being a bit "optimistic" ;). My main takeaway is that you should always test common failure modes in a controlled environment to see what will happen (such as how badly performance degrades during a rebuild) before you throw new tech into production.

The problem is the combination of "mandatory minimum progress" and "slow controller". If they're not scaling that minimum progress based on the load/performance of the controller (and they weren't), VMware is setting you up to fail.
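
A sketch of that failure mode (all numbers invented; this is not VMware's actual throttling logic): a fixed rebuild floor that ignores what the controller can deliver leaves production IO with the scraps, while a load-aware split only spends the headroom that actually exists.

code:
CONTROLLER_IOPS = 2_000        # assumed ceiling of a cheap entry-level controller
PRODUCTION_DEMAND = 1_800      # assumed steady production load
FIXED_REBUILD_MINIMUM = 1_500  # assumed "mandatory minimum progress" floor

def fixed_floor_split():
    # Rebuild takes its fixed floor no matter what; production gets the rest.
    rebuild = FIXED_REBUILD_MINIMUM
    production = max(CONTROLLER_IOPS - rebuild, 0)
    return rebuild, production

def load_aware_split(min_share: float = 0.1):
    # Spend only the headroom left after production, with a small floor so
    # the rebuild still finishes eventually under sustained load.
    headroom = max(CONTROLLER_IOPS - PRODUCTION_DEMAND, 0)
    rebuild = max(headroom, int(min_share * CONTROLLER_IOPS))
    production = CONTROLLER_IOPS - rebuild
    return rebuild, production

for name, split in (("fixed floor", fixed_floor_split), ("load-aware", load_aware_split)):
    rebuild, production = split()
    print(f"{name:>11}: rebuild {rebuild} IOPS, production left with {production} IOPS"
          f" (demand {PRODUCTION_DEMAND})")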

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

AlternateAccount posted:

Agree. Maybe it's out of ignorance, but I would honestly expect ANY hardware I pull off of the HCL to handle any task gracefully and with acceptable speed in a deployment of the very modest scope shown in the example.

Kinda blaming VMWare here.

Ding! It's on the HCL. The appeal of VSAN is "turn your existing environment into your shared storage just by throwing some disks into your ESX hosts!" Once it becomes a complex sizing exercise and not an easy add-on to an existing environment, it loses a lot of its appeal to the SMB market, and loses a lot of its ease of use and simplicity relative to running a dedicated array. You will end up paying more for enterprise-quality hardware to run your VSAN and wipe out many of the cost advantages.

This also ignores that VMware did, effectively, nothing to remediate the issue. A 12-hour outage where the vendor says "just wait for it to clear up" isn't acceptable. The RCA was basically "we assumed more throughput would be available when we wrote our software, whoops!" With dedicated storage hardware you can assume that, because you know exactly what's in the box. With SDS you can't, so you'd better have a drat good HCL and QA, and those things take a while to reach maturity and are hard enough to maintain for dedicated storage vendors.

The queue depth issue provides a convenient scapegoat, but this sort of node rebalance activity is actually really hard to do well even on dedicated hardware. Isilon can experience pretty severe performance issues during a rebuild after a node failure, and will actually not do it automatically in order to avoid this sort of scenario.

As far as the customer being at fault for not testing this... that's a pipe dream. Unless you have a test environment that mimics your production environment in size and scope, you can't test for effects that only take place at a certain scale (like rebuilding 100 VMs instead of 5, and under normal production load). Realistic budgets, especially those of small companies who do things like buy low-end RAID cards and try to get by without dedicated storage by throwing drives into their existing ESXi hosts and running VSAN, don't allow for that size of test environment, or the manpower to run it and generate realistic load on it for testing. At the end of the day you're usually taking it on faith and hoping that your vendor didn't gently caress something up when they built it, just like a test drive won't tell you whether a car will crap out its transmission at 10k miles.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


On the subject of VSAN, I wonder if they are going to let people start using SSDs for actual storage rather than just cache. The new Intel PCIe SSDs were announced the other day and those seem perfect.

Two 400GB DC P3600s in a 3-host cluster would be screaming. That would give you 1.2TB of insanely fast storage for around $4500. Not a ton of raw storage to be sure, but if you had a handful of VMs, you could get 3 lower-end Dell servers like the R420 and put together a fast as poo poo 3U SMB cluster for very little money or complexity. You could probably even get away with DC P3500s if your write load wasn't too high and save another $1200.

It actually wouldn't be a bad complement to a Dell VRTX, since it has 8 PCIe bays internally. Set up a VSAN with PCIe SSDs across the blades you have installed (assign two PCIe slots to each) for OS volumes, and then use the shared PERC8 mechanical drive backplane for bulk storage.
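
One reading of that capacity math (an assumption about the intended layout: two of those SSDs per host, with VSAN-style FTT=1 mirroring keeping two copies of every object):

code:
HOSTS = 3
SSDS_PER_HOST = 2   # assumption: two P3600s in each host
SSD_GB = 400
COPIES = 2          # FTT=1 mirroring stores every object twice

raw_gb = HOSTS * SSDS_PER_HOST * SSD_GB
usable_gb = raw_gb // COPIES
print(f"raw capacity: {raw_gb} GB, usable after mirroring: {usable_gb} GB")
# -> raw capacity: 2400 GB, usable after mirroring: 1200 GB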

Wicaeed
Feb 8, 2005

bull3964 posted:

On the subject of VSAN, I wonder if they are going to let people start using SSDs for actual storage rather than just cache. The new Intel PCIe SSDs were announced the other day and those seem perfect.

Two 400GB DC P3600s in a 3-host cluster would be screaming. That would give you 1.2TB of insanely fast storage for around $4500. Not a ton of raw storage to be sure, but if you had a handful of VMs, you could get 3 lower-end Dell servers like the R420 and put together a fast as poo poo 3U SMB cluster for very little money or complexity. You could probably even get away with DC P3500s if your write load wasn't too high and save another $1200.

It actually wouldn't be a bad complement to a Dell VRTX, since it has 8 PCIe bays internally. Set up a VSAN with PCIe SSDs across the blades you have installed (assign two PCIe slots to each) for OS volumes, and then use the shared PERC8 mechanical drive backplane for bulk storage.

Curiously, EMC ScaleIO is going to start using some form of local memory as the cache for its storage in its latest release, allowing you to use local SSDs as a faster tier of storage.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Wicaeed posted:

Curiously, EMC ScaleIO is going to start using some form of local memory as the cache for its storage in its latest release, allowing you to use local SSDs as a faster tier of storage.

Memory as cache is pretty common in storage, including SDS. All the data is already being read into memory to do things like compression, dedupe, parity, or just normal read activity, so why not just leave it there for a while in case someone wants to read it again?

Atlantis USX can use memory as a persistent storage tier, which is pretty unique though.

Bitch Stewie
Dec 17, 2011

Aquila posted:

In a major change of direction, my Hitachi HUS 150 did something just as advertised: an online firmware upgrade. We really didn't notice anything at all at the server or application layer. Really quite slick.

Also, we upgraded the firmware on our Brocade 6510s with them online. Not failing over one path at a time, but totally online. Black magic is all we can figure on that one.

We're probably about to go for a HUS 110.

Feedback so far is that they're dull, boring fuckers with a slightly nasty management interface, but they just work with no real fuss - any feedback would be great, thanks :)
