|
I'm about to pull the trigger on a pair of Tegile hybrid arrays, anything truly bad I should know about?
|
# ? Apr 26, 2016 03:37 |
|
Mr-Spain posted:I'm about to pull the trigger on a pair of Tegile hybrid arrays, anything truly bad I should know about? Nah, they're pretty solid.
|
# ? Apr 26, 2016 17:58 |
|
NippleFloss posted:Nah, they're pretty solid. Great, thanks.
|
# ? Apr 26, 2016 20:16 |
|
So we are trying to do an absolute 0 downtime failover system. They really want VMware, so we are looking at doing array-level syncing with two MSA 2040s. I can't get a non-bullshit sales guy answer on exactly how synced it is or what the failover time is like; some have told us instant, others say 15 minutes to 4 hours. It's all over the place.
|
# ? Apr 28, 2016 20:30 |
|
I don't have time to read the full MSA 2040 blurb but it looks like asynchronous replication which means you will have an RPO involved. I wouldn't be able to recommend such an entry-level SAN for something as critical as "absolutely 0 downtime".
|
# ? Apr 28, 2016 20:38 |
|
If you need absolutely 0 downtime and that's not something you know how to achieve, it's time to bring in some experts. You don't just fall into 6-7 9s.
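For context on what "6-7 9s" actually buys you, the arithmetic is brutal. A quick sketch (plain math, no vendor or architecture assumptions):

```python
# What each "nines" availability level allows in downtime per year.
# Plain arithmetic; no vendor or architecture assumptions.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for nines in range(2, 8):
    downtime_s = SECONDS_PER_YEAR * 10 ** -nines
    print(f"{nines} nines: {downtime_s:12,.2f} seconds of downtime/year")
```

Six nines is about 31 seconds of downtime a year, seven is about 3 seconds. That's the territory where a single failed-over TCP session blows your budget, which is why it costs so much.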
|
# ? Apr 28, 2016 20:53 |
|
Absolute Zero as an actual need has a massive budget behind it for new talent and consultants. You do have a massive budget, right?
|
# ? Apr 28, 2016 21:37 |
|
Yeah it usually means metro cluster and a very well thought out stack and DR solution.
|
# ? Apr 28, 2016 22:36 |
|
Around $2.6 trillion I think
|
# ? Apr 28, 2016 22:50 |
|
And the costs of the connectivity / general networking stuff. Nobody who isn't a liar will promise to give you no downtime, either.
|
# ? Apr 28, 2016 22:52 |
|
I'm currently working through the details of a datacenter move to a colo, with our link to it through our MPLS. Current speed is only 100Mbps, but we might be able to get it boosted to 1Gbps for the rest of the contract. Our environment is almost entirely virtualized; a few of the guests have RDMs that I'd like to switch to VMDKs if possible. My challenge is the guest migrations. What is my most effective method of migrating guests across the MPLS, given that we are also re-IPing everything anyways? Weekend downtime is acceptable, most guest storage LUNs are 2TB. I have the following tools available to me:
What would you use and why?
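Back-of-the-envelope on the link itself (the 80% effective-throughput figure is an assumption, not a measurement -- latency and protocol overhead will vary):

```python
# Rough transfer-time math for 2TB LUNs over the MPLS link.
# The 80% efficiency factor is an assumption, not a measurement.
def transfer_hours(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    data_bits = data_tb * 1e12 * 8                  # decimal TB -> bits
    return data_bits / (link_mbps * 1e6 * efficiency) / 3600

for link_mbps in (100, 1000):
    print(f"2 TB over {link_mbps} Mbps: ~{transfer_hours(2, link_mbps):.1f} hours")
```

Roughly 55 hours per 2TB LUN at 100Mbps versus under 6 hours at 1Gbps, so either the 1Gbps bump or seeding the data ahead of time matters a lot for a weekend window.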
|
# ? Apr 29, 2016 12:45 |
|
devmd01 posted:. What is my most effective method of migrating guests across the mpls, since we are also re-IPing everything anyways. Weekend downtime is acceptable, most guest storage luns are 2TB. How far, physically? Last time I moved a datacenter, I rented a cheap-o turnkey NAS, had poo poo slowly duplicate for a few days ahead of time, and drove it fifty miles. Had I wrecked, no production stuff would have been harmed -- it was a well-insured rental device. Beat the hell out of using their lovely ISP.
|
# ? Apr 29, 2016 14:02 |
|
I would use Veeam. If you set it up as a replica it can re-IP stuff for you, it can do seeding, and if you have Enterprise it has WAN acceleration. If you have RDMs that are not physical, I'm pretty sure it will convert those to VMDKs as well. It can also stand stuff up in a sandbox environment so you can test a bit before cutting over. That being said I haven't used the Compellent tools and I'm not sure how SCCM ties in here.
|
# ? Apr 29, 2016 14:07 |
|
Potato Salad posted:How far, physically? 30 min drive downtown. I don't want to be spending my nights and weekends driving around the city with a bunch of hard drives in the trunk. Internet Explorer posted:I would use Veeam. If you set it up as a replica it can re-IP stuff for you, it can do seeding, and if you have Enterprise it has WAN acceleration. If you have RDMs that are not physical, I'm pretty sure it will convert those to VMDKs as well. It can also stand stuff up in a sandbox environment so you can test a bit before cutting over. That sounds worth looking into, thanks! Just SCCM for automation potential if needed.
|
# ? Apr 29, 2016 14:17 |
|
Is there any way to do 802.1x for VMs with basic vSphere or will I need to get a 1000v or something like that? I'm trying to PoC some 802.1x stuff but I don't actually want to set up a real infrastructure to test, I'd rather do it on my homelab or something. some kinda jackal fucked around with this message at 15:00 on Apr 29, 2016 |
# ? Apr 29, 2016 14:58 |
|
Internet Explorer posted:I would use Veeam. If you set it up as a replica it can re-IP stuff for you, it can do seeding, and if you have Enterprise it has WAN acceleration. If you have RDMs that are not physical, I'm pretty sure it will convert those to VMDKs as well. It can also stand stuff up in a sandbox environment so you can test a bit before cutting over. Agreed, VEEAM is the best option here. Volume based replication will require something like SRM to automate the adding to inventory and re-ip work. Veeam replication was built to do this.
|
# ? Apr 29, 2016 21:16 |
|
Just a heads up, the new version of AppAssure (now Rapid Recovery) does VMDK captures and grabs your VMX config as well. It's a new major version, so we're in testing and will hold off for a maintenance release before upgrading, but it solves most of the headaches that made us consider moving to Veeam.
|
# ? Apr 29, 2016 21:22 |
|
NippleFloss posted:Agreed, VEEAM is the best option here. Volume based replication will require something like SRM to automate the adding to inventory and re-ip work. Veeam replication was built to do this. Veeam is great, testing your DR/replicated VMs in an isolated network to see if they really do come up is really, really slick.
|
# ? Apr 30, 2016 07:06 |
|
socialsecurity posted:So we are trying to do an absolute 0 downtime failover system, they really want VMware so we are looking at doing array level syncing with 2 MSA 2040s, I can't get a non bullshit sales guy answer on exactly how synced it is and what the failover time is like at all, some have told us instant others say 15 minutes to 4 hours its all over the place. How can you say "absolute 0 downtime failover system" and "MSA 2040" in the same sentence? For "absolute 0 downtime" you need to be spending a shitload more than an MSA 2040 costs (or than two of them cost) and you need to be doing your resilience at the app level as well as counting on the array. At the very minimum you're looking at stuff like the (proper) EMC, 3PAR, Hitachi and the likes on the storage side in my view.
|
# ? May 1, 2016 14:04 |
|
Bitch Stewie posted:How can you say "absolute 0 downtime failover system" and "MSA 2040" in the same sentence?
|
# ? May 2, 2016 04:11 |
|
Vulture Culture posted:are lacking the adult emotional maturity to deal with it
|
# ? May 2, 2016 08:44 |
|
bull3964 posted:Pure has been flat out magical for us. NippleFloss posted:A lot of those savings are from zero blocks being eliminated or simply not being allocated at all on the storage side, and just reserved in VMFS. Compression is nice on Nimble, but comparing datastore use to storage use is really misleading unless you're using thin provisioned VMDKs everywhere.

What NippleFloss said. But maybe more exact: By default virtual disks are created in the Lazy Zeroed Thick format on VMFS, which means the logical block range is reserved on the filesystem, but no zeroes or data are actually written until the Guest/VM requests it. And if you're curious, reads made by the Guest to unused regions are automatically returned as zeroes.

It's practically speaking quite similar to thin provisioning with respect to actual LUN data/block usage, but certainly not exactly the same. Both come with some manner of write penalty when allocating new LBAs or blocks, which is why some use cases necessitate Eager Zeroed Thick disks. I'm of the opinion, however, that the write penalties on lazy zeroed thick disks are a one-time deal and pretty negligible in practice, only meaningfully exposed during initial [flawed] benchmarking efforts or something.

Anyway, if you value your low utilization on the array, never copy disks using the Datastore Browser, and ensure your templates are not in Eager Zeroed Thick format unless that's the specific format you wanted your VMs to be in as well. Going from Eager Zeroed Thick directly back to Thick (or Lazy Zeroed Thick) via svMotion or cloning is not possible. A workaround is to go to Thin before Thick (i.e. in two svMotions).

Edit: On that last point, it's technically possible without using hardware data movers (i.e. do it the slower way via the VMkernel) and with Skip-Zero enabled, dependent on release/version. I can't really verify when or where that's possible, though. A KB should be published about this eventually, but this isn't really new behavior or information by any means. Kachunkachunk fucked around with this message at 17:12 on May 2, 2016 |
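If it helps to picture the lazy-zeroed behavior, ordinary sparse files work the same way at the filesystem level. A quick sketch (plain POSIX sparse-file behavior as an analogy, not VMFS itself):

```python
# Analogy for lazy-zeroed allocation: a sparse file reserves logical size,
# consumes no physical blocks until written, and unwritten regions read back
# as zeros. (Plain POSIX behavior -- VMFS allocation is its own beast.)
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 10 * 1024 * 1024 - 1, os.SEEK_SET)  # seek 10 MiB out
    os.write(fd, b"\0")                               # logical size is now 10 MiB
    st = os.stat(path)
    print(f"logical size:     {st.st_size} bytes")
    print(f"physical blocks:  {st.st_blocks * 512} bytes actually allocated")

    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 4096)                          # never-written region
    print("unwritten region reads back as zeros:", data == b"\0" * 4096)
finally:
    os.close(fd)
    os.remove(path)
```

On most Linux filesystems the allocated-blocks figure comes back far smaller than the logical size; that gap is the "reserved but never written" space the array never sees.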
# ? May 2, 2016 17:03 |
|
Bitch Stewie posted:"absolute 0 downtime"
|
# ? May 2, 2016 17:16 |
|
Kachunkachunk posted:What NippleFloss said. But maybe more exact: By default virtual disks are created in the Lazy Zeroed Thick format on VMFS, which means that the logical block range is reserved on the filesystem, but no zeroes or data is actually written until the Guest/VM requests it. You missed what I said though. These file servers are 89% full. There is 1.8TB of actual real data on them. I'm not talking about free space dedupe. I actually have 1.8TB of data on two servers that reduces down to 120GB on disk.
|
# ? May 3, 2016 00:21 |
|
bull3964 posted:You missed what I said though. File servers are prime targets for de-duplication because there are usually a billion versions of the same, or slightly different, files out there. Nimble only does compression though, which was what I was responding to. Outside of special snowflake datasets the benefits of compression are limited to maybe 3x at the upper end.
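A toy illustration of why the two reduce differently (made-up data; real ratios depend entirely on the dataset): a hundred identical 64 KiB files barely compress, because DEFLATE's 32 KB window can't reach back to the previous copy, while block-level dedup collapses them to a single chunk.

```python
# Toy contrast between compression and dedup on duplicate-heavy data.
# Synthetic dataset; real-world ratios depend entirely on the data.
import hashlib
import os
import zlib

chunk = os.urandom(64 * 1024)   # one 64 KiB "file" of incompressible data
dataset = chunk * 100           # 100 identical copies, 6.4 MiB logical

# Compression alone: DEFLATE's 32 KB window can't see the previous copy,
# so random data stays roughly the same size.
compressed = zlib.compress(dataset)

# Block-level dedup: store each unique 64 KiB chunk exactly once.
unique = {hashlib.sha256(dataset[i:i + 64 * 1024]).digest()
          for i in range(0, len(dataset), 64 * 1024)}
deduped = len(unique) * 64 * 1024

print(f"logical:    {len(dataset):>9,} bytes")
print(f"compressed: {len(compressed):>9,} bytes")
print(f"deduped:    {deduped:>9,} bytes")
```

Which is roughly the file-server story: compression-only arrays top out quickly on this kind of data, dedup-capable ones eat it for breakfast.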
|
# ? May 3, 2016 06:27 |
|
Anyone experienced this problem with white text on VMware horizon view for Linux?
|
# ? May 3, 2016 16:26 |
|
Is anyone running 6.0u2? I'm planning out an upcoming datacenter move and I'd like to bump us to 6 with a new vCenter install as we shuffle hosts over to the colo. We've held off from upgrading to 6 like a lot of people because of the nasty bugs, but it appears that those have been mostly resolved. Looking at the release notes, none of the known issues should affect our environment.
|
# ? May 5, 2016 12:27 |
|
We've had a bunch of customers on 6 for a while and other than the very early releases haven't run into serious issues.
|
# ? May 5, 2016 20:03 |
|
Been running 6.0u2 for a while now, no issues.
|
# ? May 5, 2016 20:28 |
|
bull3964 posted:You missed what I said though. NippleFloss posted:File servers are prime targets for de-duplication because there are usually a billion versions of the same, or slightly different, files out there. Nimble only does compression though, which was what I was responding to. Outside of special snowflake datasets the benefits of compression are limited to maybe 3x at the upper end. One way or another, Bull3964's use case is being served exceptionally well here. Glad to see, honestly!
|
# ? May 9, 2016 09:14 |
|
Can someone point me to the docker/containerization thread? I just walked out of a conference realizing that there's about 200 guys sitting on a goldmine but don't have the toolset to take full advantage of the technology yet, and are waiting for developers to catch up.
|
# ? May 10, 2016 01:36 |
|
Hadlock posted:Can someone point me to the docker/containerization thread? I just walked out of a conference realizing that there's about 200 guys sitting on a goldmine but don't have the toolset to take full advantage of the technology yet, and are waiting for developers to catch up. Pretty sure you're looking at it. Or the Linux thread. Don't think we have a container specific one or if we did it's passed on to the ages.
|
# ? May 10, 2016 05:34 |
|
There's a devops thread too that I think has some occasional discussion.
|
# ? May 10, 2016 05:48 |
|
Hadlock posted:Can someone point me to the docker/containerization thread? I just walked out of a conference realizing that there's about 200 guys sitting on a goldmine but don't have the toolset to take full advantage of the technology yet, and are waiting for developers to catch up. Containers aren't magic; they just kick the automation can down the road. You still have to deal with customization and they've got jack-all for security. It's a useful wrench in some circumstances.
|
# ? May 10, 2016 15:50 |
|
Bhodi posted:http://forums.somethingawful.com/showthread.php?threadid=3695559
|
# ? May 10, 2016 16:03 |
|
It is coming time to refresh the servers running our primary business software, and we would like to go virtual. Currently have 3 servers: 2x of them are dedicated remote desktop hosts in a load balance configuration (using NLB and RDS Session Broker), with the 3rd being a MS SQL box running our application databases. The SQL databases are not that large, about 120GB in total and growing by a few GB every couple months. DB traffic is likely about 70% reads, 30% writes. Also have 1 Hyper-V host running some small utility servers such as WiFi controllers, WSUS, file sharing, domain controller, etc.

Would like to add at least some redundancy to this setup, at least for the SQL side of things, because right now there is none. I am trying to come up with 2 separate setups to present to the higher ups: a cheap option with a little redundancy, or an expensive option with much more redundancy.

For the cheaper option I am thinking about 3x Hyper-V hosts, local storage in RAID10, preferably with SSDs. SQL server VM being replicated to at least 2 hosts via Hyper-V Replica (I do understand the limitations with this; could be data loss in event of complete host failure; still better than what we have now). 10GbE links for shared-nothing live migration.

For the more expensive option: 3 hosts, again with local storage, and something like StarWind VSAN, 10GbE links for sync of course, and Hyper-V clustering. The CEO/owner is not opposed to paying a little more for better functionality, but I am not sure there would be anything in the budget for shared storage, much less redundant shared storage.

Does this sound like a decent plan? Or am I way out of line here?
|
# ? May 11, 2016 15:36 |
|
That's completely feasible. You can run a whole lot on hosts these days (we run 120 VMs on a 3-host R610 setup) and the workload you are describing is nowhere near pushing the extremes of what is possible. If you're going to stick to small clusters (3 or 4 hosts per) there are direct-attached storage solutions that can support redundant SAS to up to four hosts without massive costs if you don't want to deal with the storage replication stuff. The Dell MD34xx series comes to mind.
|
# ? May 11, 2016 16:27 |
|
BangersInMyKnickers posted:That's completely feasible. You can run a whole lot on hosts these days (we run 120 VMs on a 3-host R610 setup) and the workload you are describing is nowhere near pushing the extremes of what is possible. If you're going to stick to small clusters (3 or 4 hosts per) there are direct-attached storage solutions that can support redundant SAS to up to four hosts without massive costs if you don't want to deal with the storage replication stuff. The Dell MD34xx series comes to mind. I agree. Look at the SAS shared storage stuff if you never plan on going above 3 hosts. It doesn't sound like you would ever need to.
|
# ? May 11, 2016 18:55 |
|
BangersInMyKnickers posted:That's completely feasible. You can run a whole lot on hosts these days (we run 120 VMs on a 3-host R610 setup) and the workload you are describing is nowhere near pushing the extremes of what is possible. If you're going to stick to small clusters (3 or 4 hosts per) there are direct-attached storage solutions that can support redundant SAS to up to four hosts without massive costs if you don't want to deal with the storage replication stuff. The Dell MD34xx series comes to mind. Ouch, got some initial pricing on the MD34xx units... Umm... yeah, one unit configured is just about the same price alone as all the host hardware combined. Has anyone here ever messed with the StarWind VSAN software before? From what I can find online it looks great and designed for small 2-3 node deployments.
|
# ? May 11, 2016 20:01 |
|
How much was your quote that an MD34xx is more than your 3 other hosts combined? What are you using as hosts? If you can't afford that, my opinion is that you should not be using shared storage. An MD34xx with a couple of hosts is about as cheap as I'd be willing to go.

Can you do it for cheaper? Yes. Throw in a Synology or QNAP NAS or go the open source route and you can make it work. But you are really putting yourself in a bad situation. Have you looked at what Windows licensing is going to cost? Have you looked at how much your 10GbE switches are going to cost?

If you are on that strict of a budget, I would consider using 2 hosts with enough local storage to run all of the servers on one host and backup to a NAS using something like Veeam Free or StorageCraft, with a plan to restore to the other server if necessary. I wouldn't go the vSAN route to save money, because it will almost certainly bite you. Just my opinion, I am sure there will be those who disagree.

[Edit: You could also consider moving essential services to the cloud and then having one box for less important stuff like WSUS or your Wifi Controller.] Internet Explorer fucked around with this message at 20:12 on May 11, 2016 |
# ? May 11, 2016 20:09 |