|
MC Fruit Stripe posted:I can buy a 6tb external for like $160 from Costco any day of the week, what's EMC trying to pull? I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup, with appropriate redundancies. Kind of like how a lot of HPC clusters just throw consumer graphics cards at the problem because it's so much cheaper than the alternatives.
|
# ? May 23, 2016 20:34 |
|
Scale out and vSAN is sort of getting us there. gently caress all that dual-ported SAS expense and hefty RAID controllers when most applications can be serviced from a mix of SSD and SATA with software handling the parity and caching.
|
# ? May 23, 2016 21:03 |
|
psydude posted:I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup, with appropriate redundancies. Kind of like how a lot of HPC clusters just throw consumer graphics cards at the problem because it's so much cheaper than the alternatives. The issue is that there is no way to provide appropriate redundancies at a software level given the sophistication most customers have. Software defined storage is getting closer, but it's not there yet. And when it is you'll still pay a bunch for licensing support because the software is the expensive thing in SANs. There is stuff like VSAN or Datrium or ScaleIO that are getting closer to allowing commodity economics for hardware, but they aren't widely deployed or cheap. On the other hand applications can be built to expect unreliable storage, but that's not a universal solution.
|
# ? May 23, 2016 21:07 |
|
Thanks Ants posted:Scale out and vSAN is sort of getting us there. gently caress all that dual-ported SAS expense and hefty RAID controllers when most applications can be serviced from a mix of SSD and SATA with software handling the parity and caching. <cluster explodes for two straight days when VSAN node is rebuilt>
|
# ? May 23, 2016 21:08 |
|
NippleFloss posted:On the other hand applications can be built to expect unreliable storage, but that's not a universal solution. I think this has got to be the way to go. If you're developing for then you already handle nodes just disappearing, kick it out the cluster and bring up another one, so outside of crusty old legacy stuff that has to run as a single server instance and needs something like fault tolerance to maintain uptime, everything else will just run off local storage and deal with it if there's a failure.
|
# ? May 23, 2016 21:23 |
|
from now on any SAN outage should be referred to as a sandstorm
|
# ? May 23, 2016 21:31 |
|
go3 posted:from now on any SAN outage should be referred to as a sandstorm by Dellrude.
|
# ? May 23, 2016 21:45 |
|
For the love of god, any of you playing around with DC/OS on public servers need to secure your Marathon instance. https://www.shodan.io/search?query=X-Marathon+
|
# ? May 23, 2016 21:54 |
|
psydude posted:I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup, with appropriate redundancies. Kind of like how a lot of HPC clusters just throw consumer graphics cards at the problem because it's so much cheaper than the alternatives. I doubt this type of storage will ever come on-premises in any way more meaningful than Gluster/Ceph, but Amazon's going to be killing it in the cloud if they can manage to pull off the EFS technical preview.
|
# ? May 23, 2016 21:58 |
|
SaltLick posted:Are we sure it was natural? It wasn't natural, but it was a different sort of death. But I'm sure none of you want a short story about hijinks leading to a death. This isn't the Games forum, after all. Edit: Just kidding. I won't leave you hanging. Apparently, he went bungee jumping at some resort in Brazil or some other South American tourist trap, and neither he nor the guy running the jump thought it was necessary to check that his line was attached. He was severely broken, but was gone by the time they got him to a hospital. Arsten fucked around with this message at 22:42 on May 23, 2016 |
# ? May 23, 2016 22:31 |
|
Whoops
|
# ? May 23, 2016 23:17 |
|
Is VSAN different than VPLEX? I've sorta been poking the storage engineers, VPLEX seems nice for data migration projects between data centers.
|
# ? May 24, 2016 01:52 |
|
psydude posted:I do wonder if we're reaching the point where it's almost becoming cheaper and easier to buy a ton of consumer-grade storage and run it in a SAN setup So what you're saying is that CE was a misunderstood pioneer.
|
# ? May 24, 2016 02:34 |
|
go3 posted:from now on any SAN outage should be referred to as a sandstorm Lord Dudeguy posted:by Dellrude. I loving love this forum. I've had a lovely week so far (yeah, I know it's Monday) and this made my day.
|
# ? May 24, 2016 02:36 |
|
Thanks Ants posted:I think this has got to be the way to go. If you're developing for then you already handle nodes just disappearing, kick it out the cluster and bring up another one, so outside of crusty old legacy stuff that has to run as a single server instance and needs something like fault tolerance to maintain uptime, everything else will just run off local storage and deal with it if there's a failure. It's actually pretty hard to do this and some application types don't lend themselves to this sort of distributed work. Some things are trending that way, but it's a slow transition. Storage arrays are getting cheaper, faster, and easier all the time, so the benefits of commodity are lessened. Things like caching and failure behavior can be trickier to build around in a distributed environment. Node rebuilds on things like Isilon or VSAN or Nutanix can have a pretty significant impact, both from the rebuild activity and the loss of cached data.
|
# ? May 24, 2016 02:43 |
|
NippleFloss posted:The issue is that there is no way to provide appropriate redundancies at a software level given the sophistication most customers have. Software defined storage is getting closer, but it's not there yet. And when it is you'll still pay a bunch for licensing support because the software is the expensive thing in SANs. You can really reduce your need for a real SAN though. Use your (expensive) SAN for OS, and pass two or more RDM disks into each guest for any application data. The RDM disks come from cheap as gently caress roll your own storage, and then use OS level software raid1 inside the guest. I can build a SmartOS node running raid0 on flash that will loving own most SANs for dirt cheap, but has zero redundancy. If I use my application to provide some redundancy, I can really do some neat things. edit: I don't do this because I am the only person on my team that could support it
|
# ? May 24, 2016 04:13 |
|
adorai posted:You can really reduce your need for a real SAN though. Use your (expensive) SAN for OS, and pass two or more RDM disks into each guest for any application data. The RDM disks come from cheap as gently caress roll your own storage, and then use OS level software raid1 inside the guest. I can build a SmartOS node running raid0 on flash that will loving own most SANs for dirt cheap, but has zero redundancy. If I use my application to provide some redundancy, I can really do some neat things. Could you go more in depth about this? I'm genuinely interested.
|
# ? May 24, 2016 04:15 |
|
Sure. I can build a SmartOS server that holds 8 SSDs and 16GB of ECC RAM for roughly $2000. This $2000 "server" will have no redundancy, but will be able to provide extremely high IO performance for both sequential and random IO. Throughput will also be good but not great, as my hypothetical server only has 2 gigabit NICs instead of 10GbE. It will probably be able to provide more IOPS than a $30k Nimble array.

I can pass the storage from my SmartOS server into VMware in a number of ways, but for the sake of this example, we'll assume iSCSI. If I created a VMware datastore on it, my VM would perform well, until I lost a drive or the server itself died. It's cheaper to build a second server than it is to make my original one HA, so I do that. Unfortunately, there is no way for VMware to mirror a VMFS datastore, so I still don't have a good solution.

Instead, I could leverage my expensive HA SAN to host the VM itself, and then just use my SmartOS servers for the application data that I store on my E: drive. I can use a Raw Disk Mapping in VMware to pass an iSCSI LUN directly into a guest OS to store this guest application data on. Windows, Linux, and most other operating systems provide some kind of in-OS RAID solution, so if I pass an RDM from each storage server into the guest and then RAID1 them, if either one goes down the guest is still accessing the other one.

Instead of a $30k Nimble array, I can use a $10k HA SAN to host the VMware guest and two $2k storage servers to host the app data. My all-in cost is $14k, a significant savings. I have no throats to choke if something goes wrong, and when I get hit by a bus or something the business is in a bad place, but it can be done. Inexpensively. Or cheap, depends on who you ask.
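The design above boils down to RAID1 mirroring done inside the guest across two independent LUNs. A toy Python sketch of those mirroring semantics (invented for illustration, not real mdadm or iSCSI): every write lands on both RDM-backed "LUNs", and reads survive losing either storage server.

```python
# Toy model of in-guest RAID1 across two RDM-backed LUNs.
# Invented for illustration -- real in-OS RAID (mdadm, Windows
# Dynamic Disks) layers resync and stale-leg handling on top of this.

class Mirror:
    def __init__(self):
        self.legs = [{}, {}]        # two "LUNs": block number -> data
        self.alive = [True, True]   # is each storage server reachable?

    def write(self, block, data):
        # Every write lands on every reachable leg.
        for i, leg in enumerate(self.legs):
            if self.alive[i]:
                leg[block] = data

    def read(self, block):
        # Any one surviving leg can serve the read.
        for i, leg in enumerate(self.legs):
            if self.alive[i] and block in leg:
                return leg[block]
        raise IOError("all mirrors lost")

m = Mirror()
m.write(0, b"app data")
m.alive[0] = False                  # one $2k SmartOS box dies
assert m.read(0) == b"app data"     # guest keeps running on the survivor
```

The part the toy skips is the expensive bit: when the dead leg comes back it is stale and has to be resynced, which is where the rebuild impact on these scale-out systems comes from.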
|
# ? May 24, 2016 04:38 |
|
yeah... but you could end up having to do all that at around 2am.
|
# ? May 24, 2016 05:31 |
|
adorai posted:Sure. I can build a SmartOS server that holds 8 SSDs and 16GB of ECC RAM for roughly $2000. This $2000 "server" will have no redundancy, but will be able to provide extremely high IO performance for both sequential and random IO. Throughput will also be good but not great, as my hypothetical server only has 2 gigabit NICs instead of 10GBe. It will probably be able to provide more IOPS than a $30k Nimble array. I can pass the storage from my SmartOS server into VMware in a number of ways, but for the sake of this example, we'll assume iSCSI. If I created a VMware datastore on it, my VM would perform well, until I lost a drive or the server itself died. It's cheaper to build a second server than it is to make my original one HA, so I do that. Unfortunately, there is no way for VMware to mirror a VMFS datastore, so I still don't have a good solution. Instead, I could leverage my expensive HA SAN to host the VM itself, and then just use my SmartOS servers for the application data that I store on my E: drive. I can use a Raw Disk Mapping in VMware to pass an iSCSI LUN directly into a guest OS to store this guest application data on. Windows, linux, and most other operating systems provide some kind of in OS RAID solution, so if I pass an RDM from each storage server into the guest, and then RAID1 them, if either one goes down the guest is still accessing the other one. Instead of a $30k Nimble array, I can use a $10k HA SAN to host the VMware guest and two $2k storage servers to host the app data. My all in cost is $14k, a significant savings. I'm getting lost here, can you go into a little more detail?
|
# ? May 24, 2016 05:54 |
|
My understanding is that you have your expensive SAN for your boot drive, essentially, and all the actual data on his SmartOS storage. Every VM will be presented a LUN from at least two different physical SmartOS storage servers. These two independent LUNs are software-RAIDed together to provide resiliency against hardware failure. Methanar fucked around with this message at 06:09 on May 24, 2016 |
# ? May 24, 2016 06:03 |
|
Vulture Culture posted:I too intentionally write job descriptions that disproportionately discourage women from applying Not trying to defend this kind of shenanigans, but the hiring manager was a woman and so is 35-40% of the rest of my department.
|
# ? May 24, 2016 07:08 |
|
Methanar posted:My understanding is that you have your expensive SAN for your boot drive, essentially, and you have all the actual data on his SmartOS storage. Every VM will be presented a LUN from at least two different hardware SmartOS storage providers. These two independent LUNs are software raided together to provide hardware failure resiliency. That's kind of how I'm viewing it, but I, too, draw high-level diagrams in MS Paint.
|
# ? May 24, 2016 07:24 |
|
adorai posted:Sure. I can build a SmartOS server that holds 8 SSDs and 16GB of ECC RAM for roughly $2000. This $2000 "server" will have no redundancy, but will be able to provide extremely high IO performance for both sequential and random IO. Throughput will also be good but not great, as my hypothetical server only has 2 gigabit NICs instead of 10GBe. It will probably be able to provide more IOPS than a $30k Nimble array. I can pass the storage from my SmartOS server into VMware in a number of ways, but for the sake of this example, we'll assume iSCSI. If I created a VMware datastore on it, my VM would perform well, until I lost a drive or the server itself died. It's cheaper to build a second server than it is to make my original one HA, so I do that. Unfortunately, there is no way for VMware to mirror a VMFS datastore, so I still don't have a good solution. Instead, I could leverage my expensive HA SAN to host the VM itself, and then just use my SmartOS servers for the application data that I store on my E: drive. I can use a Raw Disk Mapping in VMware to pass an iSCSI LUN directly into a guest OS to store this guest application data on. Windows, linux, and most other operating systems provide some kind of in OS RAID solution, so if I pass an RDM from each storage server into the guest, and then RAID1 them, if either one goes down the guest is still accessing the other one. Instead of a $30k Nimble array, I can use a $10k HA SAN to host the VMware guest and two $2k storage servers to host the app data. My all in cost is $14k, a significant savings. This post is so stupid I think I have cancer now.
|
# ? May 24, 2016 08:01 |
|
LochNessMonster posted:Not trying to defend these kind of shennanigans, but the hiring manager was a woman and so is 35-40% of the rest of my department.
|
# ? May 24, 2016 10:55 |
|
Tab8715 posted:That's kind of how I'm viewing it but I too also draw high-level diagrams in MS Paint.
|
# ? May 24, 2016 11:47 |
|
The concept of putting an OS on the highest performing disk is making my head hurt. I didn't realize people did that as a best practice. Is that mostly in small shops who don't have the resources to automate OS deployment? I feel like application data would want that space so your service responds quickly. Fake edit: While most of my last few posts were joking, this has been bothering me for however many posts people have been saying it. I kept waiting for the hammer to drop.
|
# ? May 24, 2016 15:14 |
|
H110Hawk posted:The concept of putting an OS on the highest performing disk is making my head hurt. I didn't realize people did that as a best practice. Is that mostly in small shops who don't have the resources to automate OS deployment? I feel like application data would want that space so your service responds quickly. His concept had application data on the fast white box disk and everything else on the most vendor supported disk, with the idea that the application would be written to handle the hiccups that might occur.
|
# ? May 24, 2016 15:56 |
|
Vulture Culture posted:I doubt this type of storage will ever come on-premises in any way more meaningful than Gluster/Ceph, but Amazon's going to be killing it in the cloud if they can manage to pull off the EFS technical preview. EFS is coming for sure. Without violating NDA, it's sufficient to say we are doing some really interesting things with EFS and Linux clusters for HA. Once the EFS team gets Windows support in place we'll be seeing a lot more Windows cluster-based workloads appear, which has been a big roadblock for many Windows customers on AWS.
|
# ? May 24, 2016 16:17 |
|
SSH IT ZOMBIE posted:Is VSAN different than VPLEX? I've sorta been poking the storage engineers, VPLEX seems nice for data migration projects between data centers. Very much so. vSAN is your software-defined storage from VMware. VPLEX is your HA to another datacenter, so you can fail over the backend without the hosts realizing it. Lots of other details to it, but a VPLEX still requires other storage behind it to help facilitate DR.
|
# ? May 24, 2016 16:17 |
|
thebigcow posted:His concept had application data on the fast white box disk and everything else on the most vendor supported disk, with the idea that the application would be written to handle the hiccups that might occur. That's a way to get performance on the cheap, but it's completely backwards from where you want to be from a data perspective. I want my special unicorn data on the most resilient storage possible. I don't give a gently caress about the OSs, that's something you can rebuild easily without having to go to backup. My data is the thing that needs the most uptime and recoverability. You could scale out the white box solution to lower the impact and risk of a node going down, but then you rapidly start eating into the savings you realized while upping complexity.
|
# ? May 24, 2016 16:20 |
|
No doubt, and that's why it's not something I was advocating, merely mentioning the possibility of.
|
# ? May 24, 2016 16:33 |
|
Tab8715 posted:I too also draw high-level diagrams in MS Paint. I was inspired.
|
# ? May 24, 2016 16:40 |
|
Methanar posted:My understanding is that you have your expensive SAN for your boot drive, essentially, and you have all the actual data on his SmartOS storage. Every VM will be presented a LUN from at least two different hardware SmartOS storage providers. These two independent LUNs are software raided together to provide hardware failure resiliency. Your mileage may vary! I would test this thoroughly before I rolled it out in production. The last time I tried this as a science experiment it resulted in my host getting a PSOD. Granted this was in the 3.5/4.0 days but the idea is contrived enough to still be risky.
|
# ? May 24, 2016 19:22 |
|
I had a WebEx meeting with my boss, the VP of IT, and another IT person. About ten minutes in, the sound cut out. Tried re-calling in, only to get silence. Meeting was cancelled due to technical difficulties.
|
# ? May 24, 2016 19:47 |
|
WampaLord posted:I had a WebEx meeting with my boss, the VP of IT, and another IT person. About ten minutes in, the sound cut out. Tried re-calling in, only to get silence. B-but the cloud ensures that services are always available...
|
# ? May 24, 2016 19:49 |
|
WampaLord posted:I had a WebEx meeting with my boss, the VP of IT, and another IT person. About ten minutes in, the sound cut out. Tried re-calling in, only to get silence. I've spent 4 hours in a conference room today listening to the weather service to make sure the calls aren't prematurely hanging up and that quality is good. It's 8.3 degrees in Melbourne, Australia as of 4:50am.
|
# ? May 24, 2016 20:18 |
|
Accepted a job offer working at a VAR 5 minutes from my house. Gonna be fun going from working on service provider equipment and just telling Cisco AS to fix all our poo poo to being a pure enterprise project engineer, no support whatsoever and only working with Fortune 500 clients. Should be exciting!
|
# ? May 25, 2016 01:19 |
|
DigitalMocking posted:I've spent 4 hours in a conference room today listening to the weather service to make sure the calls aren't prematurely hanging up and that quality is good. What up, fellow Melbourne IT Goon
|
# ? May 25, 2016 01:22 |
|
|
After being unemployed since early January, I finally have a job. It's slightly less than I was making previously, but I get to work with VMware, networking, and Windows Server, which means this could be my bridge out of helldesk.
|
# ? May 25, 2016 01:37 |