|
Finally got Nutanix to give us a demo box so we can learn how to deploy and configure it. So far, really good. However, management has informed me that my recommendation to proceed or wait is key for a new project that would involve 20+ nodes. We typically build environments with 3-5 hosts, so this one being larger, and it being our first Nutanix deployment in the field, is making me a bit more cautious about giving a thumbs up. Right now, our standard build is HPE c7000 BladeSystem with 3PAR SAN, so my mission is basically to identify changes to our process (which is very poorly documented) and identify risks/mitigations as needed. Good times ahead.
|
# ? Aug 17, 2017 19:55 |
|
Lean on Nutanix to get you hooked up with someone who can help you through the whole thing so you don't gently caress it up. You might have to take a margin hit first time around but it will be worth it, then you can get more of the team trained up.
|
# ? Aug 17, 2017 20:21 |
|
Operating a Nutanix cluster doesn't really require much training. That's part of the appeal. They can cover everything you need to know during the installation. BUT: I'm not a fan of large Nutanix clusters, nor clusters with high storage IO requirements, especially not if those workloads are monolithic, like a few very busy databases. We've had some poor customer experiences with Nutanix in those situations, even with all flash clusters.
|
# ? Aug 17, 2017 22:33 |
|
Thanks for the feedback. That's pretty much our situation: a few SQL servers that get hammered by lovely App servers. Starting to not get too excited about this now. gently caress.
|
# ? Aug 17, 2017 23:32 |
|
Alfajor posted:Thanks for the feedback. Let them know you're cautious about these servers? Maybe they'll offer to let you test it out and see if their systems stack up.
|
# ? Aug 18, 2017 05:50 |
|
Does anyone have any recommendations for guides to a beginner on VMware? I've got to train someone to become a VMware admin and it's been a long time since I've read through Mastering vCenter. Any online guides or videos that might be good?
|
# ? Aug 18, 2017 18:40 |
|
^ https://labs.hol.vmware.com/HOL/catalogs/ Mr Shiny Pants posted:Let them know you're cautious about these servers? Maybe they'll offer to let you test it out and see if their systems stack up.
|
# ? Aug 19, 2017 01:02 |
|
looooooool All disks in one particular node got angry. On boot you can see the SSD initialization fail. vmkernel.log logs events for each drive saying "No filesystem on the device." Seems a bit coincidental for all drives to bomb out at the same time. I can still at least see valid partition tables/etc on them so I'm hard pressed to believe it's a hardware failure. This seems to have happened after the 6.0 to 6.5 upgrade I pushed via VUM - should I try re-running the 6.5 upgrade for shits and giggles? This is my home lab and I've got backups so it's not like it's the end of the world by any stretch, I'm now more interested in seeing if I can bring it back to life and validate the hardware isn't jacked.
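If you want to confirm which devices are actually throwing that error before blaming hardware, something like this can tally the events per device. A minimal sketch: the naa.* device-ID format and exact message text are assumptions based on typical ESXi vmkernel.log lines, so adjust the regex to match what your log actually says.

```python
import re
from collections import Counter

# Tally "No filesystem on the device" events per device in a vmkernel.log.
# NOTE: the naa.* identifier and message text are assumed, not confirmed --
# check a real log line before trusting the counts.
ERR = re.compile(r"(naa\.[0-9a-f]+).*No filesystem on the device")

def count_fs_errors(lines):
    """Return a Counter mapping device ID -> number of error events."""
    return Counter(m.group(1) for line in lines if (m := ERR.search(line)))
```

If every vSAN-claimed disk shows up with roughly the same count at roughly the same time, that points at the controller or the software layer rather than the disks themselves.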
|
# ? Aug 19, 2017 06:11 |
|
Are all the disks from the same batch? It's not unheard of in those cases for the failure of one to be rapidly followed by the failure of the others. If not I'd suspect a controller issue rather than the disks themselves.
|
# ? Aug 19, 2017 15:18 |
|
It's likely not a hardware issue at all, and the recommended fix from VMware, if you go that far down the rabbit hole, will be to rebuild the disk group on that host. vSAN throws errors and occasionally suffers failures like that for no discernible reason. It's not fully baked.
|
# ? Aug 19, 2017 16:53 |
|
Yeah, it's four spindles, one SSD for vSAN, and one small SSD for the system drive. If the controller died I wouldn't expect to be able to boot the thing. Only the disks claimed by vSAN seem to be unhappy, which makes me think it's something to do with the black magic. Rebuilding the disk group would wipe the disks, wouldn't it?
|
# ? Aug 19, 2017 17:59 |
|
I just P2V'ed our main SQL and file server (running on WS 2012 R2) today. All went well using VMware Converter running on the physical host. This makes 5 machines consolidated onto 1 at my work site. The last machine that could possibly be consolidated is our pfSense instance. I'm pretty sure I could get pfSense virtualized (and the internet agrees it's totally possible to do and performs fine), but there's that lingering thought of putting all the eggs into one basket. Anyone have strong thoughts on virtualizing pfSense? Physical pfSense has been utterly rock solid for us.
|
# ? Aug 19, 2017 21:45 |
|
H2SO4 posted:Yeah it's four spindles, one SSD for VSAN and one small SSD for the system drive. If the controller died I wouldn't expect to be able to boot the thing. Only the disks claimed by vSAN seem to be unhappy which makes me think it's something to do with the black magic. It will wipe the disks, though if you're running vSAN you should have multiple hosts and a replication factor of at least 2 set on your storage policies, so you shouldn't actually lose data.
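As a sanity check on the storage-policy math, here's a minimal sketch of the standard vSAN RAID-1 mirroring rules: with failures-to-tolerate (FTT) of n, each object gets n+1 full replicas (plus witness components), and you need 2n+1 hosts to keep quorum.

```python
# Sketch of vSAN RAID-1 (mirroring) storage policy math.
# FTT = number of host/disk failures to tolerate.
def raid1_requirements(ftt: int) -> dict:
    if ftt < 0:
        raise ValueError("FTT must be >= 0")
    return {
        "replicas": ftt + 1,             # full data copies stored
        "min_hosts": 2 * ftt + 1,        # hosts needed for quorum
        "capacity_multiplier": ftt + 1,  # raw-to-usable capacity overhead
    }
```

So the usual FTT=1 policy means two full copies plus a witness across a minimum of three hosts, which is why losing one host's disk group shouldn't lose data.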
|
# ? Aug 19, 2017 23:08 |
|
big money big clit posted:It will wipe the disks, though if you're running vSAN you should have multiple hosts and a replication factor of at least 2 set on your storage policies, so you shouldn't actually lose data. Oh for sure, but I already moved the fourth host out of the pool to start the migration back to standard datastores. I've only got four or so inaccessible VMs and I've already moved the pain in the rear end stuff over first, so I'm probably just going to try and grab the rest then nuke and pave. All in all it's a great learning experience.

Edit: Definitely wasn't a hardware failure. Something related to vSAN itself just straight up barfed and decided the disks on that host were no longer trustworthy. Almost like it was in the middle of a resync operation and lost power/connectivity for too long, making the rest of the pool assume the disks were permanently gone and rebuild whatever data they could between the two of them. It would be nice for some of this stuff to be more obvious, but then again the point of hyperconverged poo poo is that it's all ~~~magic~~~. It's great until something runs out of pixie dust. H2SO4 fucked around with this message at 04:21 on Aug 21, 2017 |
# ? Aug 20, 2017 00:43 |
|
bobfather posted:Anyone have strong thoughts on virtualizing pfSense? Physical pfSense has been utterly rock solid for us. The only reason I P2V poo poo is legacy app stuff that I cannot migrate. Just do a fresh setup and clean house; most of it is probably old, undocumented stuff anyway.
|
# ? Aug 20, 2017 00:46 |
|
Moey posted:The only reason I P2V poo poo is legacy app stuff that I cannot migrate. Seconding this. Whenever I'm allowed to choose between migrating something or rebuilding it I'll always choose the latter (except for weird legacy poo poo as Moey mentioned). Green-fields all the way baby, if you're gonna do something you may as well make sure that it's done right.

bobfather posted:I'm pretty sure I could get pfSense virtualized (and the internet agrees it's totally possible to do and performs fine), but there's that lingering thought of putting all the eggs into one basket. I'd make sure that you can use VMXNET3 adapters with it instead of just bog-standard E1000 adapters. According to the wiki you can deploy VMware Tools after installation, which should include the VMXNET3 driver so you can change the VM NICs (doing that is normally fine as long as the order and MACs don't change; Cisco lets you do it with FirePower virtual appliances): https://doc.pfsense.org/index.php/PfSense_on_VMware_vSphere_/_ESXi#Installing_Open-VM-Tools

Regarding "putting all your eggs into one basket": you've already got all your eggs in one basket with physical servers. As long as you configure your VMware stuff properly and practice capacity management you'll be able to stay on top of things. If you require guaranteed service availability then you'll need to look at FT or load-balancing/fail-over using application-level clustering or dedicated NLB appliances (we use Citrix NetScalers, which are quite nice). Or at the very least allow things to fail open using HSRP or WCCP (depends how you've got pfSense set up; tbh I've never used it before so NFI if it works in routed or transparent mode).
|
# ? Aug 21, 2017 09:51 |
|
You can even specify the MAC in the NIC settings, had to do this for a P2V with an old loving SAP server. Worked fine.
|
# ? Aug 21, 2017 14:29 |
|
Keep in mind that (as of the last time I researched this) distributed virtual switches do not like HA stuff like VRRP/HSRP. Something to do with the fact that they don't have a CAM table but instead use VM metadata to decide where traffic for a given MAC address goes. If anyone knows differently I'd be interested to hear, since the only other workaround for this behavior I'm aware of is putting everything in promiscuous mode.
|
# ? Aug 22, 2017 19:25 |
|
H2SO4 posted:Keep in mind that (at the last time I researched this) distributed virtual switches do not like HA stuff like VRRP/HSRP. Something to do with the fact that they don't have a CAM table but use VM metadata to decide where traffic for a given MAC address goes to instead. If anyone else knows differently I'd be interested to hear, since the only other sort of workaround for this behavior I'm aware of is putting everything in promiscuous mode. Enable forged transmits and MAC address changes and you should be fine. Basically the only checking VMware does is to make sure the MAC the VM is using matches what's in the VMX, to prevent a few different types of attacks. On a distributed vSwitch you can do this on a per-VM basis, so you can select each VM in the VRRP group and enable it just for them.
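For context on why that policy change is needed: VRRP answers on a virtual MAC derived from the VRID (00:00:5e:00:01:XX for IPv4, per RFC 5798), which never matches the MAC recorded in any VM's .vmx file, so the default security policy drops the frames. A quick sketch of the mapping:

```python
# The IPv4 VRRP virtual router MAC (RFC 5798) is 00:00:5e:00:01:{VRID}.
# Because this MAC is not the one in the VM's .vmx, the vSwitch's
# forged-transmits / MAC-changes policies must be set to Accept for it to work.
def vrrp_virtual_mac(vrid: int) -> str:
    """Return the IPv4 VRRP virtual MAC for a given VRID (1-255)."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be between 1 and 255")
    return f"00:00:5e:00:01:{vrid:02x}"
```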
|
# ? Aug 22, 2017 20:03 |
|
VNC is just some garbage backdoor rootkit to rot security. Don't start calling it a feature like as if security is just some joke.
|
# ? Aug 28, 2017 21:35 |
|
Well, it's done. I threw pfSense into a VM with passthrough for the WAN and LAN interfaces, to make it as safe as can be. This is the 4th P2V I was able to do, and the ESXi host is sitting pretty with 8 VMs and room for at least a couple more with no issues. I'm now strongly considering taking a bare metal FreeNAS system, upgrading it with an LSI SAS card and then virtualizing FreeNAS with passthrough on the drives. I'm pretty sure that removes pretty much all danger from virtualizing FreeNAS, and then I have a second ESXi host that can host backups and that I can setup some failover plans with.
|
# ? Aug 28, 2017 23:17 |
|
VMWare is actually VerMinWare it is a platform for virtual lies It's just like that Citrix aka Sit Tricks where they run GoToMyPassword This was not enterprise grade material!
|
# ? Aug 29, 2017 02:53 |
|
Notax posted:VMWare is actually VerMinWare it is a platform for virtual lies do you smell burning toast
|
# ? Aug 29, 2017 02:55 |
|
H2SO4 posted:do you smell burning toast Am I having a stroke?
|
# ? Aug 29, 2017 05:48 |
|
Notax posted:it is a platform for virtual lies This is exactly how I explain virtualization to new people. It's a piece of software that lies to operating systems. It's like in a comedy where the main character has dates with
|
# ? Aug 29, 2017 05:55 |
|
Dr. Arbitrary posted:This is exactly how I explain virtualization to new people. It's a piece of software that lies to operating systems. It's like in a comedy where the main character has dates with Or containers where he somehow pulls it off with all the dates sitting at the same table.
|
# ? Aug 29, 2017 06:35 |
|
How spooked are VMware to have attempted to go all-in on for the second or third time
|
# ? Aug 29, 2017 22:31 |
|
Thanks Ants posted:How spooked are VMware to have attempted to go all-in on for the second or third time They announced this last year at VMworld; it's just finally coming out now. And it's not really the same as vCloud Air, just an attempt to get a slice of the pie for public cloud workloads. We've actually already got customers who are interested, though that was before pricing was announced.
|
# ? Aug 30, 2017 01:47 |
|
I don't get the whole thing of... let's put ESXi/VMware v<whatever> on top of Xen/AWS. Seems awfully redundant.
|
# ? Aug 30, 2017 07:04 |
|
I guess it fulfils the requirements people have for capacity available in global datacenters from a single provider, but all their apps need HA/fault tolerance because they can't cluster.
|
# ? Aug 30, 2017 09:03 |
|
I'm interested in it purely from a DR standpoint, since we use a vCloud Air DR strategy. Not having to pay the 6k or whatever a month we currently do, and being able to spin up an entire ESX environment at the click of a button, would be nice; however, I'm not quite sure how we'd actually apply a configuration to the entire thing. Does Terraform work to set up VMware Cloud instances?
|
# ? Aug 30, 2017 09:05 |
|
Tab8715 posted:I don't get the whole thing of... let's put ESXi/VMware v<whatever> on top of Xen/AWS. Manage your on-prem and cloud environments with the same toolset. VMware skills are cheaper to hire than AWS skills. Let VMware handle availability. No need to refactor applications for cloud architectures.
|
# ? Aug 30, 2017 13:23 |
|
Holy poo poo, I was looking forward to playing with it but not with those prices!
|
# ? Aug 30, 2017 13:50 |
|
Tab8715 posted:I don't get the whole thing of... let's put ESXi/VMware v<whatever> on top of Xen/AWS. ESX isn't going on top of anything with AWS. ESX is running directly on the hardware.
|
# ? Aug 30, 2017 17:16 |
|
big money big clit posted:Manage your on prem and cloud environments with the same toolset. VMware skills are cheaper than AWS skills to hire. Let VMware handle availability. No need to refactor applications for cloud architectures. I'd agree that VMware skills are more readily available and cheaper, but one only needs to learn AWS IaaS, not the whole stack. Someone mentioned this isn't over Xen, but it's still a layer over AWS. IaaS doesn't need that much refactoring if you're just straight up moving stuff over, but then that's not using the cloud to the fullest. I see AWS's angle, but it just seems too complex and inefficient unless you just want out of your datacenter. A DR plan on the other hand... Gucci Loafers fucked around with this message at 04:31 on Sep 1, 2017 |
# ? Sep 1, 2017 04:22 |
|
I really need to branch out and learn some cloud poo poo.
|
# ? Sep 1, 2017 04:28 |
|
Tab8715 posted:I'd agree that VMware skills are more readily available, cheaper but one only needs to learn AWS IaaS not the whole stack. It's ESXi running directly on the hardware. It's not leveraging anything AWS-related except their hardware and facilities. And you absolutely do need to consider your application design if you're moving to AWS. If you move your big monolithic database server into an AWS instance and expect to get the same uptime you would on your own hardware, you will probably not be happy with the results. Many on-prem applications are pets. EC2 is made for cattle. If you try to use it for pets you're going to end up spending a lot of money for worse results.
|
# ? Sep 1, 2017 04:56 |
|
big money big clit posted:It's ESXi running on directly on the hardware. It's not leveraging anything AWS related except their hardware and facilities. It's still just virt. and even there a ton of pets that just need reliable host. Which is what AWS is marketing with VMware.
|
# ? Sep 1, 2017 06:58 |
|
|
Tab8715 posted:It's still just virt. and even there a ton of pets that just need reliable host. Which is what AWS is marketing with VMware. Yes? Are you agreeing with me? The point is that AWS does not actually provide a reliable host for a single EC2 instance, and so is not a great fit for pets, while VMware is.
|
# ? Sep 1, 2017 16:11 |