Mr-Spain
Aug 27, 2003

Bullshit... you can be mine.
I'm about to pull the trigger on a pair of Tegile hybrid arrays, anything truly bad I should know about?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Mr-Spain posted:

I'm about to pull the trigger on a pair of Tegile hybrid arrays, anything truly bad I should know about?

Nah, they're pretty solid.

Mr-Spain
Aug 27, 2003

Bullshit... you can be mine.

NippleFloss posted:

Nah, they're pretty solid.

Great, thanks.

socialsecurity
Aug 30, 2003

So we are trying to do an absolute 0 downtime failover system. They really want VMware, so we are looking at doing array-level syncing with two MSA 2040s. I can't get a non-bullshit answer out of any sales guy on exactly how synced it is or what the failover time is like; some have told us it's instant, others say 15 minutes to 4 hours. It's all over the place.

Thanks Ants
May 21, 2004

#essereFerrari


I don't have time to read the full MSA 2040 blurb, but it looks like asynchronous replication, which means you will have an RPO involved. I can't recommend such an entry-level SAN for something as critical as "absolute 0 downtime".
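
To put a number on that RPO: a minimal back-of-the-envelope sketch (Python, with made-up change rates and intervals, nothing MSA-specific):

code:
    # Worst-case data-loss window for async replication.
    # Both numbers below are illustrative assumptions, not MSA 2040 specs.
    change_rate_mb_per_min = 50      # how fast the workload dirties data
    replication_interval_min = 15    # async replication cadence (the RPO)

    # Worst case: the primary dies just before the next sync fires, so
    # everything written since the last completed sync is gone.
    data_at_risk_mb = change_rate_mb_per_min * replication_interval_min
    print(f"Up to {data_at_risk_mb} MB (~{replication_interval_min} min "
          f"of writes) lost on failover")

Whatever the sales guy promises, with async you lose up to one replication interval of writes; only synchronous replication gets that to zero, and the MSA doesn't do that.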

Internet Explorer
Jun 1, 2005

If you need absolutely 0 downtime and that's not something you know how to achieve, it's time to bring in some experts. You don't just fall into 6-7 9s.
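
For reference, those 9s translate into allowed downtime per year like this (quick Python, straightforward arithmetic):

code:
    # Allowed downtime per year at increasing availability levels.
    minutes_per_year = 365.25 * 24 * 60

    for nines in range(3, 8):            # 99.9% up through 99.99999%
        unavailability = 10 ** -nines
        downtime_sec = minutes_per_year * unavailability * 60
        print(f"{1 - unavailability:.5%} -> {downtime_sec:10.1f} seconds/year")

Six nines is about 32 seconds of downtime a year; seven nines is about 3 seconds. That's what you're signing up for.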

Potato Salad
Oct 23, 2014

nobody cares


Absolute zero downtime as an actual requirement has a massive budget behind it for new talent and consultants.

You do have a massive budget, right? :corsair:

evil_bunnY
Apr 2, 2003

Yeah, it usually means a metro cluster and a very well-thought-out stack and DR solution.

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe
Around $2.6 trillion I think :v:

Thanks Ants
May 21, 2004

#essereFerrari


And the costs of the connectivity / general networking stuff.

Nobody who isn't a liar will promise to give you no downtime, either.

devmd01
Mar 7, 2006

Elektronik
Supersonik
I'm currently working through the details of a datacenter move to a colo, with our link to it running over our MPLS. Current speed is only 100 Mbps, but we might be able to get it boosted to 1 Gbps for the rest of the contract. Our environment is almost entirely virtualized; a few of the guests have RDMs that I'd like to switch to VMDKs if possible.

My challenge is the guest migrations. What is my most effective method of migrating guests across the MPLS, given that we are also re-IPing everything anyway? Weekend downtime is acceptable, and most guest storage LUNs are 2TB (see the transfer math after this post). I have the following tools available to me:

  • Compellent Volume Replication
  • Veeam Enterprise
  • vSphere 5.5/6.x
  • SCCM 2012R2
  • Stupid homegrown scripts

What would you use and why?
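
Before picking a tool, the raw transfer math on that link is worth a look (Python; assumes the MPLS link is the bottleneck and ignores protocol overhead and compression):

code:
    # Seeding time for one 2 TB LUN over the MPLS link.
    lun_bits = 2 * 1e12 * 8                  # 2 TB (decimal) in bits

    for link_mbps in (100, 1000):
        hours = lun_bits / (link_mbps * 1e6) / 3600
        print(f"{link_mbps:>4} Mbps: ~{hours:.1f} hours per 2 TB LUN")

Roughly 44 hours per LUN at 100 Mbps versus about 4.5 hours at 1 Gbps, which is why seeding well ahead of the cutover weekend (or the NAS-in-a-car approach suggested below) matters so much.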

Potato Salad
Oct 23, 2014

nobody cares


devmd01 posted:

What is my most effective method of migrating guests across the MPLS, given that we are also re-IPing everything anyway? Weekend downtime is acceptable, and most guest storage LUNs are 2TB.

What would you use and why?


How far, physically?

Last time I moved a datacenter, I rented a cheap-o turnkey NAS, had poo poo slowly duplicate for a few days ahead of time, and drove it fifty miles. Had I wrecked, no production stuff would have been harmed -- it was a well-insured rental device. Beat the hell out of using their lovely ISP.

Internet Explorer
Jun 1, 2005

I would use Veeam. If you set it up as a replica it can re-IP stuff for you, it can do seeding, and if you have Enterprise it has WAN acceleration. If you have RDMs that are not physical, I'm pretty sure it will convert those to VMDKs as well. It can also stand stuff up in a sandbox environment so you can test a bit before cutting over.

That being said I haven't used the Compellent tools and I'm not sure how SCCM ties in here.

devmd01
Mar 7, 2006

Elektronik
Supersonik

Potato Salad posted:

How far, physically?

Last time I moved a datacenter, I rented a cheap-o turnkey NAS, had poo poo slowly duplicate for a few days ahead of time, and drove it fifty miles. Had I wrecked, no production stuff would have been harmed -- it was a well-insured rental device. Beat the hell out of using their lovely ISP.

30 min drive downtown. I don't want to be spending my nights and weekends driving around the city with a bunch of hard drives in the trunk.

Internet Explorer posted:

I would use Veeam. If you set it up as a replica it can re-IP stuff for you, it can do seeding, and if you have Enterprise it has WAN acceleration. If you have RDMs that are not physical, I'm pretty sure it will convert those to VMDKs as well. It can also stand stuff up in a sandbox environment so you can test a bit before cutting over.

That being said I haven't used the Compellent tools and I'm not sure how SCCM ties in here.

That sounds worth looking into, thanks!

Just SCCM for automation potential if needed.

some kinda jackal
Feb 25, 2003

Is there any way to do 802.1x for VMs with basic vSphere, or will I need to get a 1000v or something like that?

I'm trying to PoC some 802.1x stuff but I don't actually want to set up real infrastructure to test; I'd rather do it in my homelab or something.

some kinda jackal fucked around with this message at 15:00 on Apr 29, 2016

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Internet Explorer posted:

I would use Veeam. If you set it up as a replica it can re-IP stuff for you, it can do seeding, and if you have Enterprise it has WAN acceleration. If you have RDMs that are not physical, I'm pretty sure it will convert those to VMDKs as well. It can also stand stuff up in a sandbox environment so you can test a bit before cutting over.

That being said I haven't used the Compellent tools and I'm not sure how SCCM ties in here.

Agreed, Veeam is the best option here. Volume-based replication will require something like SRM to automate the add-to-inventory and re-IP work. Veeam replication was built to do this.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Just a heads up, the new version of AppAssure (now Rapid Recovery) does VMDK captures and grabs your VMX config as well. It's a new major version, so we're in testing and will hold off for a maintenance release before upgrading, but it solves most of the headaches that made us consider moving to Veeam.

Mr Shiny Pants
Nov 12, 2012

NippleFloss posted:

Agreed, Veeam is the best option here. Volume-based replication will require something like SRM to automate the add-to-inventory and re-IP work. Veeam replication was built to do this.

Veeam is great; testing your DR/replicated VMs in an isolated network to see if they really do come up is really, really slick.

Bitch Stewie
Dec 17, 2011

socialsecurity posted:

So we are trying to do an absolute 0 downtime failover system. They really want VMware, so we are looking at doing array-level syncing with two MSA 2040s. I can't get a non-bullshit answer out of any sales guy on exactly how synced it is or what the failover time is like; some have told us it's instant, others say 15 minutes to 4 hours. It's all over the place.

How can you say "absolute 0 downtime failover system" and "MSA 2040" in the same sentence?

For "absolute 0 downtime" you need to be spending a shitload more than an MSA 2040 costs (or than two of them cost) and you need to be doing your resilience at the app level as well as counting on the array.

At the very minimum you're looking at stuff like the (proper) EMC, 3PAR, Hitachi and the like on the storage side, in my view.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bitch Stewie posted:

How can you say "absolute 0 downtime failover system" and "MSA 2040" in the same sentence?

For "absolute 0 downtime" you need to be spending a shitload more than an MSA 2040 costs (or than two of them cost) and you need to be doing your resilience at the app level as well as counting on the array.

At the very minimum you're looking at stuff like the (proper) EMC, 3PAR, Hitachi and the like on the storage side, in my view.

Agreed here. Everyone knows downtime is bad, but if anyone in your organization is demanding "absolutely 0 downtime!!!" then it's clear they either don't understand the exponential cost of adding more reliability or lack the adult emotional maturity to deal with it.

evil_bunnY
Apr 2, 2003

Vulture Culture posted:

lack the adult emotional maturity to deal with it

Borrowing this quote for the next time someone asks me what the reliability they're dreaming of costs.

Kachunkachunk
Jun 6, 2011

bull3964 posted:

Pure has been flat out magical for us.

One of my LUNs has two 2TB file servers (DFS replication, so they both have the same data). The VMs themselves are nearly at capacity, but the LUN is only reporting 120GB used.

4TB in VMware down to 120GB. Witchcraft.

NippleFloss posted:

A lot of those savings are from zero blocks being eliminated or simply not being allocated at all on the storage side, and just reserved in VMFS. Compression is nice on Nimble, but comparing datastore use to storage use is really misleading unless you're using thin-provisioned VMDKs everywhere.

I've also seen some customers that don't get nearly the advertised compression ratios, though many do. I do think that the lack of deduplication is a problem for Nimble in competitive situations, though. It's coming on their flash systems at least, so that's a good sign.

Pure is awesome if you can afford it.

What NippleFloss said. But to be more exact: by default, virtual disks are created in the Lazy Zeroed Thick format on VMFS, which means that the logical block range is reserved on the filesystem, but no zeroes or data are actually written until the Guest/VM requests it. And if you're curious, reads made by the Guest to unused regions are automatically returned as zeroes. Practically speaking, it's quite similar to thin provisioning with respect to actual LUN data/block usage, but certainly not exactly the same. Both come with some manner of write penalty when allocating new LBAs or blocks, which is why some use cases necessitate Eager Zeroed Thick disks. I'm of the opinion, however, that the write penalties on Lazy Zeroed Thick disks are a one-time deal and pretty negligible in practice, only really showing up during initial [flawed] benchmarking efforts or the like.
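
You can see the "unwritten space costs nothing" effect with plain zlib; a toy illustration (Python, nothing array-specific about it):

code:
    import os
    import zlib

    # Zero-filled blocks (space that is reserved but never written)
    # compress away to almost nothing; already-random data does not.
    block = 1024 * 1024  # 1 MiB

    for name, data in (("zeroed", bytes(block)), ("random", os.urandom(block))):
        compressed = zlib.compress(data)
        print(f"{name}: {len(data):>8} -> {len(compressed):>8} bytes")

The zeroed megabyte shrinks to around a kilobyte while the random one doesn't compress at all, which is the same reason an array reports so little usage for mostly-unwritten thick disks.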

Anyway, if you value your low utilization on the array, never copy disks using the Datastore Browser, and ensure your templates are not in Eager Zeroed Thick format unless that's the specific format you wanted your VMs to be in as well. Going from Eager Zeroed Thick directly back to Thick (or Lazy Zeroed Thick) via svMotion or cloning is not possible. A workaround is to go to Thin before Thick (i.e. in two svMotions).

Edit: On that last point, it's technically possible without using hardware data movers (i.e. do it the slower way via the VMkernel) and with Skip-Zero being enabled, dependent on release/version. I can't really verify when or where that's possible, though. A KB should be published about this eventually, but this isn't really new behavior or information by any means.

Kachunkachunk fucked around with this message at 17:12 on May 2, 2016

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Bitch Stewie posted:

"absolute 0 downtime"

The great part about zero downtime is that you can save all kinds of money by not preparing for downtime, since it won't ever happen anymore.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Kachunkachunk posted:

What NippleFloss said. But maybe more exact: By default virtual disks are created in the Lazy Zeroed Thick format on VMFS, which means that the logical block range is reserved on the filesystem, but no zeroes or data is actually written until the Guest/VM requests it.

You missed what I said though.

These file servers are 89% full. There is 1.8TB of actual real data on them. I'm not talking about free-space dedupe. I actually have 1.8TB of data on two servers that reduces down to 120GB on disk.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

bull3964 posted:

You missed what I said though.

These file servers are 89% full. There is 1.8TB of actual real data on them. I'm not talking about free-space dedupe. I actually have 1.8TB of data on two servers that reduces down to 120GB on disk.

File servers are prime targets for deduplication because there are usually a billion versions of the same, or slightly different, files out there. Nimble only does compression, though, which is what I was responding to. Outside of special-snowflake datasets, the benefits of compression are limited to maybe 3x at the upper end.
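
The file-server effect is easy to sketch: fixed-block dedup is just "hash every block, store each unique block once." A toy model (Python, 4 KiB blocks, nothing vendor-specific):

code:
    import hashlib
    import os

    BLOCK = 4096  # toy fixed-size dedup block

    def dedup_ratio(data: bytes) -> float:
        """Logical size vs. size after keeping one copy of each unique block."""
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        unique = {hashlib.sha256(b).digest() for b in blocks}
        return len(blocks) / len(unique)

    # Two "file servers" holding identical data (DFS replicas): every block
    # of the second copy hashes to one the array has already stored.
    file_server = os.urandom(BLOCK * 1000)
    print(f"{dedup_ratio(file_server * 2):.1f}x")  # 2.0x from the duplicate alone

That's why a DFS pair dedups so well: the second server is nearly free, and duplicate blocks within each server compound on top of that.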

mewse
May 2, 2006

Has anyone experienced this problem with white text in VMware Horizon View for Linux?

devmd01
Mar 7, 2006

Elektronik
Supersonik
Is anyone running 6.0u2? I'm planning out an upcoming datacenter move and I'd like to bump us to 6 with a new vCenter install as we shuffle hosts over to the colo. We've held off from upgrading to 6 like a lot of people because of the nasty bugs, but it appears that those have been mostly resolved. Looking at the release notes, none of the known issues should affect our environment.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

We've had a bunch of customers on 6 for a while and, other than the very early releases, haven't run into serious issues.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Been running 6.0u2 for a while now, no issues.

Kachunkachunk
Jun 6, 2011

bull3964 posted:

You missed what I said though.

These file servers are 89% full. There is 1.8TB of actual real data on them. I'm not talking about free-space dedupe. I actually have 1.8TB of data on two servers that reduces down to 120GB on disk.

NippleFloss posted:

File servers are prime targets for deduplication because there are usually a billion versions of the same, or slightly different, files out there. Nimble only does compression, though, which is what I was responding to. Outside of special-snowflake datasets, the benefits of compression are limited to maybe 3x at the upper end.

One way or another, bull3964's use case is being served exceptionally well here. Glad to see it, honestly!

Hadlock
Nov 9, 2004

Can someone point me to the docker/containerization thread? I just walked out of a conference realizing that there are about 200 guys sitting on a goldmine who don't have the toolset to take full advantage of the technology yet, and are waiting for developers to catch up.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Hadlock posted:

Can someone point me to the docker/containerization thread? I just walked out of a conference realizing that there are about 200 guys sitting on a goldmine who don't have the toolset to take full advantage of the technology yet, and are waiting for developers to catch up.

Pretty sure you're looking at it. Or the Linux thread. Don't think we have a container-specific one, or if we did it's passed on to the ages.

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe
There's a devops thread too that I think has some occasional discussion.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Hadlock posted:

Can someone point me to the docker/containerization thread? I just walked out of a conference realizing that there are about 200 guys sitting on a goldmine who don't have the toolset to take full advantage of the technology yet, and are waiting for developers to catch up.

http://forums.somethingawful.com/showthread.php?threadid=3695559

Containers aren't magic; they just kick the automation can down the road. You still have to deal with customization and they've got jack-all for security. It's a useful wrench in some circumstances.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bhodi posted:

http://forums.somethingawful.com/showthread.php?threadid=3695559

Containers aren't magic; they just kick the automation can down the road. You still have to deal with customization and they've got jack-all for security. It's a useful wrench in some circumstances.

Let's not understate the value of kicking the automation can, though. Other development disciplines figured out that the God Object design pattern was a really, really bad idea decades ago, but systems engineers have been really eager to reinvent the wheel by stuffing as much logic into Chef, Puppet, etc. as they possibly can. Service discovery, secret management, view logic, ad-hoc packaging, management of running processes: it's a one-stop shop. The best benefit of app containerization (in the Docker sense, rather than the lxc/jails/zones approach) is that it provides a clear, enforceable separation of concerns.

stevewm
May 10, 2005
It's coming time to refresh the servers running our primary business software, and we would like to go virtual.

We currently have 3 servers: two of them are dedicated Remote Desktop hosts in a load-balanced configuration (using NLB and RDS Session Broker), with the third being an MS SQL box running our application databases. The SQL databases are not that large, about 120GB in total and growing by a few GB every couple of months. DB traffic is likely about 70% reads, 30% writes. We also have 1 Hyper-V host running some small utility servers such as WiFi controllers, WSUS, file sharing, a domain controller, etc.

We would like to add at least some redundancy to this setup, at least for the SQL side of things, because right now there is none. I am trying to come up with two separate setups to present to the higher-ups: a cheap option with a little redundancy, or an expensive option with much more redundancy.

For the cheaper option I am thinking about 3 Hyper-V hosts with local storage in RAID10, preferably SSDs, with the SQL Server VM replicated to at least 2 hosts via Hyper-V Replica (I do understand the limitations of this; there could be data loss in the event of a complete host failure, but it's still better than what we have now), and 10GbE links for shared-nothing live migration.

For the more expensive option: 3 hosts, again with local storage, plus something like StarWind VSAN (10GbE links for the sync traffic, of course) and Hyper-V clustering.

The CEO/owner is not opposed to paying a little more for better functionality, but I am not sure there would be anything in the budget for shared storage, much less redundant shared storage.

Does this sound like a decent plan? Or am I way out of line here?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

That's completely feasible. You can run a whole lot on hosts these days (we run 120 VMs on a 3-host R610 setup), and the workload you are describing is nowhere near pushing the extremes of what is possible. If you're going to stick to small clusters (3 or 4 hosts each), there are direct-attached storage solutions that can support redundant SAS to up to four hosts without massive costs, if you don't want to deal with the storage replication stuff. The Dell MD34xx series comes to mind.

Internet Explorer
Jun 1, 2005

BangersInMyKnickers posted:

That's completely feasible. You can run a whole lot on hosts these days (we run 120 VMs on a 3-host R610 setup), and the workload you are describing is nowhere near pushing the extremes of what is possible. If you're going to stick to small clusters (3 or 4 hosts each), there are direct-attached storage solutions that can support redundant SAS to up to four hosts without massive costs, if you don't want to deal with the storage replication stuff. The Dell MD34xx series comes to mind.

I agree. Look at the SAS shared storage stuff if you never plan on going above 3 hosts. It doesn't sound like you would ever need to.

stevewm
May 10, 2005

BangersInMyKnickers posted:

That's completely feasible. You can run a whole lot on hosts these days (we run 120 VMs on a 3-host R610 setup), and the workload you are describing is nowhere near pushing the extremes of what is possible. If you're going to stick to small clusters (3 or 4 hosts each), there are direct-attached storage solutions that can support redundant SAS to up to four hosts without massive costs, if you don't want to deal with the storage replication stuff. The Dell MD34xx series comes to mind.

Ouch, got some initial pricing on the MD34xx units... Umm... yeah, one configured unit alone is just about the same price as all the host hardware combined.

Has anyone here ever messed with the StarWind VSAN software before? From what I can find online it looks great, and it's designed for small 2-3 node deployments.

Internet Explorer
Jun 1, 2005

How much was your quote, if a single MD34xx costs more than your 3 hosts combined? What are you using as hosts?

If you can't afford that, my opinion is that you should not be using shared storage. An MD34xx with a couple of hosts is about as cheap as I'd be willing to go. Can you do it cheaper? Yes. Throw in a Synology or QNAP NAS or go the open source route and you can make it work. But you are really putting yourself in a bad situation. Have you looked at what Windows licensing is going to cost? Have you looked at how much your 10GbE switches are going to cost?

If you are on that strict of a budget, I would consider using 2 hosts with enough local storage to run all of the servers on one host, and back up to a NAS using something like Veeam Free or StorageCraft, with a plan to restore to the other server if necessary. I wouldn't go the vSAN route to save money, because it will almost certainly bite you.

Just my opinion; I'm sure there will be those who disagree.

[Edit: You could also consider moving essential services to the cloud and then having one box for less important stuff like WSUS or your Wifi Controller.]

Internet Explorer fucked around with this message at 20:12 on May 11, 2016
