psydude
Apr 1, 2008

MC Fruit Stripe posted:

I can buy a 6TB external for like $160 from Costco any day of the week, what's EMC trying to pull?

I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup, with appropriate redundancies. Kind of like how a lot of HPC clusters just throw consumer graphics cards at the problem because it's so much cheaper than the alternatives.


Thanks Ants
May 21, 2004

#essereFerrari


Scale out and vSAN is sort of getting us there. gently caress all that dual-ported SAS expense and hefty RAID controllers when most applications can be serviced from a mix of SSD and SATA with software handling the parity and caching.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

psydude posted:

I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup, with appropriate redundancies. Kind of like how a lot of HPC clusters just throw consumer graphics cards at the problem because it's so much cheaper than the alternatives.

The issue is that there is no way to provide appropriate redundancies at a software level given the sophistication most customers have. Software-defined storage is getting closer, but it's not there yet. And when it is, you'll still pay a bunch for licensing and support, because the software is the expensive thing in SANs.

There is stuff like VSAN or Datrium or ScaleIO that are getting closer to allowing commodity economics for hardware, but they aren't widely deployed or cheap. On the other hand applications can be built to expect unreliable storage, but that's not a universal solution.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Thanks Ants posted:

Scale out and vSAN is sort of getting us there. gently caress all that dual-ported SAS expense and hefty RAID controllers when most applications can be serviced from a mix of SSD and SATA with software handling the parity and caching.

<cluster explodes for two straight days when VSAN node is rebuilt>

Thanks Ants
May 21, 2004

#essereFerrari


NippleFloss posted:

On the other hand applications can be built to expect unreliable storage, but that's not a universal solution.

I think this has got to be the way to go. If you're developing for :yaycloud: then you already handle nodes just disappearing, kick it out the cluster and bring up another one, so outside of crusty old legacy stuff that has to run as a single server instance and needs something like fault tolerance to maintain uptime, everything else will just run off local storage and deal with it if there's a failure.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
from now on any SAN outage should be referred to as a sandstorm

Lord Dudeguy
Sep 17, 2006
[Insert good English here]

go3 posted:

from now on any SAN outage should be referred to as a sandstorm

by Dellrude.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


For the love of god, any of you playing around with DC/OS on public servers need to secure your Marathon instance.

https://www.shodan.io/search?query=X-Marathon+
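
(As a rough illustration of what "secure your Marathon instance" means: the sketch below is roughly the check that Shodan query automates. It assumes Marathon on its default HTTP port 8080 with nothing in front of it; the host list is a placeholder, point it only at boxes you own.)

code:
#!/usr/bin/env python3
"""Sketch: does a Marathon instance answer its REST API without any auth?

Assumes Marathon's default HTTP port (8080) and no auth proxy in front.
Only point this at hosts you own. The host list below is a placeholder.
"""
import json
import urllib.error
import urllib.request

HOSTS = ["10.0.0.5", "10.0.0.6"]  # placeholder: your masters / public agents

def check(host, port=8080, timeout=5):
    url = f"http://{host}:{port}/v2/apps"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            apps = json.loads(resp.read().decode()).get("apps", [])
        # A clean 200 with an app list means anyone who can reach the port
        # can enumerate (and almost certainly deploy) apps on this cluster.
        print(f"{host}: EXPOSED - {len(apps)} app(s) visible with no credentials")
    except urllib.error.HTTPError as exc:
        print(f"{host}: HTTP {exc.code} - something is challenging requests")
    except (urllib.error.URLError, OSError) as exc:
        print(f"{host}: unreachable ({exc})")

if __name__ == "__main__":
    for h in HOSTS:
        check(h)

An open /v2/apps generally also means an open POST to /v2/apps, which is how the publicly indexed clusters end up running other people's containers.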

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

psydude posted:

I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup, with appropriate redundancies. Kind of like how a lot of HPC clusters just throw consumer graphics cards at the problem because it's so much cheaper than the alternatives.
I doubt this type of storage will ever come on-premises in any way more meaningful than Gluster/Ceph, but Amazon's going to be killing it in the cloud if they can manage to pull off the EFS technical preview.

Arsten
Feb 18, 2003

SaltLick posted:

Are we sure it was natural?

It wasn't natural, but it was a different sort of death.


But I'm sure none of you want a short story about hijinks leading to a death. This isn't the Games forum, after all.

Edit: Just kidding. I won't leave you hanging.

Apparently, he went bungee jumping at some resort in Brazil or some other South American tourist trap, and neither he nor the guy running the jump thought it was necessary to check that his line was attached. He was severely broken, but was gone by the time they got him to a hospital.

Arsten fucked around with this message at 22:42 on May 23, 2016

George H.W. Cunt
Oct 6, 2010





Whoops

SSH IT ZOMBIE
Apr 19, 2003
No more blinkies! Yay!
College Slice
Is VSAN different from VPLEX? I've sorta been poking the storage engineers; VPLEX seems nice for data migration projects between data centers.

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe

psydude posted:

I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup

So what you're saying is that CE was a misunderstood pioneer.

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

go3 posted:

from now on any SAN outage should be referred to as a sandstorm

Lord Dudeguy posted:

by Dellrude.

I loving love this forum.
I've had a lovely week so far (yeah, I know it's Monday) and this made my day.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Thanks Ants posted:

I think this has got to be the way to go. If you're developing for :yaycloud: then you already handle nodes just disappearing, kick it out the cluster and bring up another one, so outside of crusty old legacy stuff that has to run as a single server instance and needs something like fault tolerance to maintain uptime, everything else will just run off local storage and deal with it if there's a failure.

It's actually pretty hard to do this and some application types don't lend themselves to this sort of distributed work. Some things are trending that way, but it's a slow transition. Storage arrays are getting cheaper, faster, and easier all the time, so the benefits of commodity are lessened.

Things like caching and failure behavior can be trickier to build around in a distributed environment. Node rebuilds on things like Isilon or VSAN or Nutanix can have a pretty significant impact, both from the rebuild activity and the loss of cached data.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

The issue is that there is no way to provide appropriate redundancies at a software level given the sophistication most customers have. Software-defined storage is getting closer, but it's not there yet. And when it is, you'll still pay a bunch for licensing and support, because the software is the expensive thing in SANs.

There is stuff like VSAN or Datrium or ScaleIO that are getting closer to allowing commodity economics for hardware, but they aren't widely deployed or cheap. On the other hand applications can be built to expect unreliable storage, but that's not a universal solution.
You can really reduce your need for a real SAN though. Use your (expensive) SAN for the OS, and pass two or more RDM disks into each guest for any application data. The RDM disks come from cheap as gently caress roll-your-own storage, and then use OS-level software RAID1 inside the guest. I can build a SmartOS node running RAID0 on flash that will loving own most SANs for dirt cheap, but has zero redundancy. If I use my application to provide some redundancy, I can really do some neat things.

edit: I don't do this because I am the only person on my team that could support it

Methanar
Sep 26, 2013

by the sex ghost

adorai posted:

You can really reduce your need for a real SAN though. Use your (expensive) SAN for the OS, and pass two or more RDM disks into each guest for any application data. The RDM disks come from cheap as gently caress roll-your-own storage, and then use OS-level software RAID1 inside the guest. I can build a SmartOS node running RAID0 on flash that will loving own most SANs for dirt cheap, but has zero redundancy. If I use my application to provide some redundancy, I can really do some neat things.

Could you go more in depth about this.

I'm genuinely interested.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Sure. I can build a SmartOS server that holds 8 SSDs and 16GB of ECC RAM for roughly $2000. This $2000 "server" will have no redundancy, but will be able to provide extremely high IO performance for both sequential and random IO. Throughput will also be good but not great, as my hypothetical server only has 2 gigabit NICs instead of 10GbE. It will probably be able to provide more IOPS than a $30k Nimble array. I can pass the storage from my SmartOS server into VMware in a number of ways, but for the sake of this example, we'll assume iSCSI. If I created a VMware datastore on it, my VM would perform well, until I lost a drive or the server itself died. It's cheaper to build a second server than it is to make my original one HA, so I do that. Unfortunately, there is no way for VMware to mirror a VMFS datastore, so I still don't have a good solution. Instead, I could leverage my expensive HA SAN to host the VM itself, and then just use my SmartOS servers for the application data that I store on my E: drive. I can use a Raw Disk Mapping in VMware to pass an iSCSI LUN directly into a guest OS to store this guest application data on. Windows, Linux, and most other operating systems provide some kind of in-OS RAID solution, so if I pass an RDM from each storage server into the guest, and then RAID1 them, if either one goes down the guest is still accessing the other one. Instead of a $30k Nimble array, I can use a $10k HA SAN to host the VMware guest and two $2k storage servers to host the app data. My all-in cost is $14k, a significant savings.

I have no throats to choke if something goes wrong, and when I get hit by a bus or something the business is in a bad place, but it can be done. Inexpensively. Or cheap, depends on who you ask.
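
(To make the in-guest half of that concrete, here is a minimal sketch for a Linux guest using mdadm, assuming the two RDM LUNs, one from each SmartOS box, already show up as /dev/sdb and /dev/sdc. Device names, mount point, and config path are placeholders; it's an illustration of the idea, not a supported recipe.)

code:
#!/usr/bin/env python3
"""Sketch: mirror two RDM-backed LUNs inside a Linux guest with mdadm.

Assumes the two RDMs (one per SmartOS storage server) are already visible
as block devices -- /dev/sdb and /dev/sdc are placeholders, check lsblk.
Run as root; this destroys anything already on those devices.
"""
import subprocess

MEMBERS = ["/dev/sdb", "/dev/sdc"]    # placeholder: one RDM from each storage box
MD_DEV = "/dev/md0"
MOUNT_POINT = "/data"                 # the "E: drive" equivalent for app data
MDADM_CONF = "/etc/mdadm/mdadm.conf"  # /etc/mdadm.conf on RHEL-family distros

def run(cmd):
    """Echo a command and run it, stopping on any failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_mirror():
    # RAID1 across the two LUNs: losing either storage server just degrades
    # the array, and the guest keeps serving from the surviving side.
    run(["mdadm", "--create", MD_DEV, "--run", "--level=1",
         "--raid-devices=2"] + MEMBERS)  # --run skips the confirmation prompt
    run(["mkfs.ext4", MD_DEV])
    run(["mkdir", "-p", MOUNT_POINT])
    run(["mount", MD_DEV, MOUNT_POINT])
    # Record the array so it reassembles on reboot.
    scan = subprocess.run(["mdadm", "--detail", "--scan"],
                          check=True, capture_output=True, text=True)
    with open(MDADM_CONF, "a") as conf:
        conf.write(scan.stdout)

if __name__ == "__main__":
    build_mirror()

A Windows guest would do the same thing with a mirrored dynamic volume or Storage Spaces; either way the point is that the guest OS, not VMware, is what rides out the loss of one of the cheap boxes.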

Alfajor
Jun 10, 2005

The delicious snack cake.
yeah... but you could end up having to do all that at around 2am.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


adorai posted:

Sure. I can build a SmartOS server that holds 8 SSDs and 16GB of ECC RAM for roughly $2000. This $2000 "server" will have no redundancy, but will be able to provide extremely high IO performance for both sequential and random IO. Throughput will also be good but not great, as my hypothetical server only has 2 gigabit NICs instead of 10GbE. It will probably be able to provide more IOPS than a $30k Nimble array. I can pass the storage from my SmartOS server into VMware in a number of ways, but for the sake of this example, we'll assume iSCSI. If I created a VMware datastore on it, my VM would perform well, until I lost a drive or the server itself died. It's cheaper to build a second server than it is to make my original one HA, so I do that. Unfortunately, there is no way for VMware to mirror a VMFS datastore, so I still don't have a good solution. Instead, I could leverage my expensive HA SAN to host the VM itself, and then just use my SmartOS servers for the application data that I store on my E: drive. I can use a Raw Disk Mapping in VMware to pass an iSCSI LUN directly into a guest OS to store this guest application data on. Windows, Linux, and most other operating systems provide some kind of in-OS RAID solution, so if I pass an RDM from each storage server into the guest, and then RAID1 them, if either one goes down the guest is still accessing the other one. Instead of a $30k Nimble array, I can use a $10k HA SAN to host the VMware guest and two $2k storage servers to host the app data. My all-in cost is $14k, a significant savings.

I have no throats to choke if something goes wrong, and when I get hit by a bus or something the business is in a bad place, but it can be done. Inexpensively. Or cheap, depends on who you ask.

I'm getting lost here, could you go into a little more detail?

Methanar
Sep 26, 2013

by the sex ghost
My understanding is that you have your expensive SAN for your boot drive, essentially, and you have all the actual data on his SmartOS storage. Every VM will be presented a LUN from at least two different hardware SmartOS storage providers. These two independent LUNs are software raided together to provide hardware failure resiliency.

Methanar fucked around with this message at 06:09 on May 24, 2016

LochNessMonster
Feb 3, 2005

I need about three fitty


Vulture Culture posted:

I too intentionally write job descriptions that disproportionately discourage women from applying

Not trying to defend this kind of shenanigans, but the hiring manager was a woman, and so is 35-40% of the rest of my department.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


Methanar posted:

My understanding is that you have your expensive SAN for your boot drive, essentially, and you have all the actual data on his SmartOS storage. Every VM will be presented a LUN from at least two different hardware SmartOS storage providers. These two independent LUNs are software raided together to provide hardware failure resiliency.



That's kind of how I'm viewing it but I too also draw high-level diagrams in MS Paint. :)

DigitalMocking
Jun 8, 2010

Wine is constant proof that God loves us and loves to see us happy.
Benjamin Franklin

adorai posted:

Sure. I can build a SmartOS server that holds 8 SSDs and 16GB of ECC RAM for roughly $2000. This $2000 "server" will have no redundancy, but will be able to provide extremely high IO performance for both sequential and random IO. Throughput will also be good but not great, as my hypothetical server only has 2 gigabit NICs instead of 10GbE. It will probably be able to provide more IOPS than a $30k Nimble array. I can pass the storage from my SmartOS server into VMware in a number of ways, but for the sake of this example, we'll assume iSCSI. If I created a VMware datastore on it, my VM would perform well, until I lost a drive or the server itself died. It's cheaper to build a second server than it is to make my original one HA, so I do that. Unfortunately, there is no way for VMware to mirror a VMFS datastore, so I still don't have a good solution. Instead, I could leverage my expensive HA SAN to host the VM itself, and then just use my SmartOS servers for the application data that I store on my E: drive. I can use a Raw Disk Mapping in VMware to pass an iSCSI LUN directly into a guest OS to store this guest application data on. Windows, Linux, and most other operating systems provide some kind of in-OS RAID solution, so if I pass an RDM from each storage server into the guest, and then RAID1 them, if either one goes down the guest is still accessing the other one. Instead of a $30k Nimble array, I can use a $10k HA SAN to host the VMware guest and two $2k storage servers to host the app data. My all-in cost is $14k, a significant savings.

I have no throats to choke if something goes wrong, and when I get hit by a bus or something the business is in a bad place, but it can be done. Inexpensively. Or cheap, depends on who you ask.

This post is so stupid I think I have cancer now.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

LochNessMonster posted:

Not trying to defend this kind of shenanigans, but the hiring manager was a woman, and so is 35-40% of the rest of my department.
Very interesting; maybe things are changing. Do you get a lot of direct applicants or do most of your coworkers come in through recruitment channels?

Collateral Damage
Jun 13, 2009

Tab8715 posted:

That's kind of how I'm viewing it but I too also draw high-level diagrams in MS Paint. :)
Up your game. http://www.visguy.com/2011/08/16/crayon-visio-network-shapes-revisited/

H110Hawk
Dec 28, 2006
The concept of putting an OS on the highest performing disk is making my head hurt. I didn't realize people did that as a best practice. Is that mostly in small shops who don't have the resources to automate OS deployment? I feel like application data would want that space so your service responds quickly.

Fake edit: While most of my last few posts were joking, this has been bothering me for however many posts people have been saying it. I kept waiting for the hammer to drop.

thebigcow
Jan 3, 2001

Bully!

H110Hawk posted:

The concept of putting an OS on the highest performing disk is making my head hurt. I didn't realize people did that as a best practice. Is that mostly in small shops who don't have the resources to automate OS deployment? I feel like application data would want that space so your service responds quickly.

Fake edit: While most of my last few posts were joking, this has been bothering me for however many posts people have been saying it. I kept waiting for the hammer to drop.

His concept had application data on the fast white box disk and everything else on the most vendor supported disk, with the idea that the application would be written to handle the hiccups that might occur.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Vulture Culture posted:

I doubt this type of storage will ever come on-premises in any way more meaningful than Gluster/Ceph, but Amazon's going to be killing it in the cloud if they can manage to pull off the EFS technical preview.

EFS is coming for sure. Without getting into NDA territory, it's sufficient to say we are doing some really interesting things with EFS and Linux clusters for HA.

Once the EFS team gets Windows support in place we'll be seeing a lot more Windows cluster-based workloads appear; the lack of it has been a big roadblock for many Windows customers on AWS.

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

SSH IT ZOMBIE posted:

Is VSAN different from VPLEX? I've sorta been poking the storage engineers; VPLEX seems nice for data migration projects between data centers.

Very much so

vSAN is VMware's software-defined storage.
VPLEX is HA to another datacenter so you can fail over the backend without the hosts realizing it. Lots of other details to it, but a VPLEX still requires other storage behind it to facilitate DR.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


thebigcow posted:

His concept had application data on the fast white box disk and everything else on the most vendor supported disk, with the idea that the application would be written to handle the hiccups that might occur.

That's a way to get performance on the cheap, but it's completely backwards from where you want to be from a data perspective.

I want my special unicorn data on the most resilient storage possible. I don't give a gently caress about the OSs, that's something you can rebuild easily without having to go to backup. My data is the thing that needs the most uptime and recoverability.

You could scale out the white box solution to lower the impact and risk of a node going down, but then you rapidly start eating into the savings you realized while upping complexity.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
No doubt, and that's why it's not something I was advocating, merely mentioning the possibility of.

Methanar
Sep 26, 2013

by the sex ghost

Tab8715 posted:

I too also draw high-level diagrams in MS Paint. :)

I was inspired.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Methanar posted:

My understanding is that you have your expensive SAN for your boot drive, essentially, and you have all the actual data on his SmartOS storage. Every VM will be presented a LUN from at least two different hardware SmartOS storage providers. These two independent LUNs are software raided together to provide hardware failure resiliency.



Your mileage may vary! I would test this thoroughly before I rolled it out in production. The last time I tried this as a science experiment it resulted in my host getting a PSOD. Granted this was in the 3.5/4.0 days but the idea is contrived enough to still be risky.

WampaLord
Jan 14, 2010

I had a WebEx meeting with my boss, the VP of IT, and another IT person. About ten minutes in, the sound cut out. Tried re-calling in, only to get silence.

Meeting was cancelled due to technical difficulties. :haw:

mewse
May 2, 2006

WampaLord posted:

I had a WebEx meeting with my boss, the VP of IT, and another IT person. About ten minutes in, the sound cut out. Tried re-calling in, only to get silence.

Meeting was cancelled due to technical difficulties. :haw:

B-but the cloud ensures that services are always available...

DigitalMocking
Jun 8, 2010

Wine is constant proof that God loves us and loves to see us happy.
Benjamin Franklin

WampaLord posted:

I had a WebEx meeting with my boss, the VP of IT, and another IT person. About ten minutes in, the sound cut out. Tried re-calling in, only to get silence.

Meeting was cancelled due to technical difficulties. :haw:

I've spent 4 hours in a conference room today listening to the weather service to make sure the calls aren't prematurely hanging up and that quality is good.

It's 8.3 degrees in Melbourne, Australia as of 4:50am.

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k
Accepted a job offer working at a VAR 5 minutes from my house :toot: Gonna be fun going from working on service provider equipment and just telling Cisco AS to fix all our poo poo to being a pure enterprise project engineer, no support whatsoever and only working with Fortune 500 clients. Should be exciting!

Neddy Seagoon
Oct 12, 2012

"Hi Everybody!"

DigitalMocking posted:

I've spent 4 hours in a conference room today listening to the weather service to make sure the calls aren't prematurely hanging up and that quality is good.

It's 8.3 degrees in Melbourne, Australia as of 4:50am.

What up, fellow Melbourne IT Goon
:australia::hf::australia:


22 Eargesplitten
Oct 10, 2010



After being unemployed since early January, I finally have a job :yotj:. It's slightly less than I was making previously, but I get to work with VMware, networking, and Windows Server, which means this could be my bridge out of helldesk.
