|
adorai posted:I thought TPS was gone now. Still there, it just gets salted per-VM by default now, so inter-VM sharing is unlikely to get invoked.
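If you want to see exactly where a host stands, here's a minimal pyVmomi sketch (connection details are placeholders) that reads the Mem.ShareForceSalting advanced setting: 0 means classic inter-VM TPS, while the default of 2 salts pages per-VM so cross-VM sharing effectively never happens.

```python
#!/usr/bin/env python3
"""Check the TPS salting setting on each ESXi host (pyVmomi sketch)."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; substitute your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    # Mem.ShareForceSalting: 0 = classic inter-VM TPS,
    # 2 (the default) = per-VM salting, i.e. TPS rarely kicks in between VMs.
    opts = host.configManager.advancedOption.QueryOptions("Mem.ShareForceSalting")
    print(host.name, opts[0].value)

view.Destroy()
Disconnect(si)
```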
|
# ¿ Mar 11, 2016 23:36 |
|
Wicaeed posted:Well I'd love to read the article, but the VMware KB site is (yet again) down. It may actually just be that you've gotten the cookie of death from their website. I find that sometimes it'll tell me it's down, and clearing my cache or trying a different browser sorts it out.
|
# ¿ Mar 31, 2016 04:21 |
|
Wicaeed posted:School me on the VMware SDN options. NSX can provide you overlays via VXLAN, plus load balancers, firewalls, and routers, through a combination of kernel modules and virtual appliances. It integrates with a couple of the cloud management platforms out there. I've got a number of customers that basically just "rubber stamp" out copies of their network for testing and QA, and eventually even roll them to production. If you don't need to provision a lot of network services, you can probably just look at something like Embotics or some other cloud management tool.
|
# ¿ Jun 6, 2016 05:25 |
|
I'd ask if they see themselves needing to frequently provision networks, or if you can just hand them a pool of networks that they have free rein to select IPs from and provision into. Depending on the applications and what the developers need, you probably don't need to provision on-demand networks, but if you have a use case that requires it then NSX would be worth looking into. VMware actually very recently dropped the pricing on the bits of NSX that would provide this capability: https://www.vmware.com/products/nsx/compare.html (it's in the lowest tier.) I don't think they've done a good job communicating this, though; I just spent the better part of 20 minutes trying to figure out if I could even tell you that. Basically, the edition you need is going to run ~$2,000 per socket list (as opposed to the previous $6,000 per socket list). Adorai, it may be worth revisiting whether the lowest tier addresses your problems. This could give you the flexibility to provide "VPC-like" networking, floating IPs, etc. In fact, you could potentially couple it with VIO (VMware Integrated OpenStack) and have a lot of the same APIs/interfaces. If they're asking "why VMware", the answer is pretty much "my job as Wicaeed is to support the poo poo you build, and I need to understand the infrastructure to do it!" I would sit down with them and go over a few scenarios that will be common for them. Good odds you can make Terraform work for a lot of it using the default VMware networking, even if they're doing all the fancy poo poo HashiCorp is selling; see the sketch below for the pool-of-networks approach.
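For the hand-them-a-pool approach, here's a minimal pyVmomi sketch (VDS object lookup omitted; portgroup names and VLAN IDs are made up) that stamps a dvPortgroup out of each pre-allocated VLAN:

```python
"""Stamp out dvPortgroups for a pre-allocated pool of VLANs (pyVmomi sketch)."""
from pyVmomi import vim

def create_pool_portgroups(dvs, vlan_pool):
    """dvs: a vim.dvs.VmwareDistributedVirtualSwitch; vlan_pool: {name: vlan_id}."""
    specs = []
    for name, vlan_id in vlan_pool.items():
        spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
        spec.name = name
        spec.type = "earlyBinding"  # static binding, the usual choice for VM networks
        spec.numPorts = 128

        port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
        vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec()
        vlan.vlanId = vlan_id
        vlan.inherited = False
        port_cfg.vlan = vlan
        spec.defaultPortConfig = port_cfg
        specs.append(spec)
    # One task creates the whole batch; the devs then self-serve from these networks.
    return dvs.AddDVPortgroup_Task(specs)

# Example: hand the dev team a fixed pool they have free rein over.
# task = create_pool_portgroups(dvs, {"dev-net-101": 101, "dev-net-102": 102})
```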
|
# ¿ Jun 7, 2016 02:58 |
|
mayodreams posted:I'll share my NTP story that I use as an example of what not to do for interviews. You basically pointed out a major leadership failure, so he threw you under the bus. It sounds like it's for the best, though, because that sounds like a terrible place to work!
|
# ¿ Aug 18, 2016 05:50 |
|
Wicaeed posted:Anyone here actually do Netflow/sFlow logging across an entire VDS or Cisco UCS deployment? You probably want to look at vRealize Network Insight. I want to say it's $1,500 per socket, and it will give you total visibility of what's going on in the VDS and the UCS. edit: this came from the Arkin acquisition.
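If you just need the raw flows rather than the Arkin analytics, the VDS has a built-in IPFIX (NetFlow v10) exporter you can point at any collector. A minimal pyVmomi sketch, with the collector address and timer values as placeholders:

```python
"""Point a VDS's built-in IPFIX (NetFlow v10) exporter at a collector (pyVmomi sketch)."""
from pyVmomi import vim

def enable_vds_ipfix(dvs, collector_ip, collector_port=4739):
    """dvs: a vim.dvs.VmwareDistributedVirtualSwitch."""
    ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig()
    ipfix.collectorIpAddress = collector_ip
    ipfix.collectorPort = collector_port
    ipfix.activeFlowTimeout = 60      # seconds before a long-lived flow is exported
    ipfix.idleFlowTimeout = 15        # seconds of idle before a flow is flushed
    ipfix.samplingRate = 0            # 0 = every packet; raise this on busy fabrics
    ipfix.internalFlowsOnly = False   # also export flows that leave the host

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion  # required for any reconfigure
    spec.ipfixConfig = ipfix
    return dvs.ReconfigureDvs_Task(spec)

# Note: monitoring still has to be flipped on per portgroup/port afterwards
# (the ipfixEnabled bool in the port's monitoring policy).
```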
|
# ¿ Sep 20, 2016 18:37 |
|
BangersInMyKnickers posted:I've been working with VMware support for a few months now on a really annoying issue with their LACP implementation. The first part is that vdSwitches with LACP default to long timeouts (90s), which is not exactly desirable for failed-link detection on the storage fabric. The switches default to short timeouts (3s) and I'm forced to just live with the mismatch. You can manually configure each host to run the vdSwitch with short timeouts over SSH, but there doesn't seem to be a way to change the vdSwitch definition, so it resets back to long on reboot. This ends up manifesting itself at points in the night when disk traffic is extremely light and the host goes a few seconds without sending disk requests. The only thing moving at that point are the LACP PDUs, but the host is sending them out so slowly that the switch thinks the link died, so it bounces it; the host freaks out and re-establishes it, and your logs get flooded with nasty-looking stuff. That's been a big problem as of late. That said, are you using iSCSI? If so, stop using LACP and swap to MPIO. Also make sure your switch and hosts agree on your load-balancing hash; lots of physical switches default to src-mac and need to be manually changed. Also, if going to multiple switches, make sure they support MLAG or vPC or something equivalent and that it's properly configured. You should be able to adjust the timers on the switch side and link failures should still be detected instantly.
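In the meantime, if you're stuck with the reboot-reset behavior from the quote, here's a rough sketch of re-applying the short timeout over SSH with Python/paramiko. The host list and credentials are placeholders, and the exact esxcli flags are an assumption from memory; check `esxcli network vswitch dvs vmware lacp timeout set --help` on your build before trusting them:

```python
"""Re-apply short LACP timeouts on every host after reboot (paramiko + esxcli sketch)."""
import paramiko

HOSTS = ["esx01.example.com", "esx02.example.com"]  # placeholder inventory

# -t 1 is assumed to mean "short timeout" here; VDS name and LAG id are placeholders.
CMD = "esxcli network vswitch dvs vmware lacp timeout set -s dvSwitch-Storage -l 1 -t 1"

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username="root", password="password")  # placeholder creds
    _, stdout, stderr = ssh.exec_command(CMD)
    print(host, stdout.read().decode().strip(), stderr.read().decode().strip())
    ssh.close()
```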
|
# ¿ Nov 11, 2016 07:29 |
|
Distributed vSwitch can honor/set 802.1p values. Turn on Network I/O Control, I believe, and you should see it in the dvSwitch settings somewhere. What is the Arista sending you the traffic tagged with? And how exactly is the machine freaking out?
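A minimal pyVmomi sketch of that path, assuming the pre-6.0 (NIOC v2) resource-pool model where the 802.1p tag lives on the network resource pool; the pool key and tag value are placeholders:

```python
"""Enable Network I/O Control and stamp an 802.1p priority tag (pyVmomi sketch)."""
from pyVmomi import vim

def tag_traffic_class(dvs, pool_key="virtualMachine", qos_tag=5):
    """dvs: a vim.dvs.VmwareDistributedVirtualSwitch."""
    # NIOC has to be on before resource pools (and their QoS tags) take effect.
    dvs.EnableNetworkResourceManagement(enable=True)

    alloc = vim.DVSNetworkResourcePoolAllocationInfo()
    alloc.priorityTag = qos_tag  # 802.1p CoS value (0-7) stamped on egress frames

    spec = vim.DVSNetworkResourcePoolConfigSpec()
    spec.key = pool_key          # e.g. the built-in "virtualMachine" pool
    spec.configVersion = next(
        p.configVersion for p in dvs.networkResourcePool if p.key == pool_key)
    spec.allocationInfo = alloc
    dvs.UpdateNetworkResourcePool([spec])
```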
|
# ¿ Nov 22, 2016 09:32 |
|
MC Fruit Stripe posted:It has been so long since I've done it that I can't even think of the answer clearly - what functions do I lose if I have ESXi installed with access to only local storage? We're giving up what, vMotion, HA, DRS? (e: FT...) HA and HA-related things. You can do svMotion and regular vMotion without shared storage now, as of 5.1 if I recall.
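For reference, through the API a shared-nothing migration is just a RelocateSpec that carries both a destination host and a datastore. A minimal pyVmomi sketch (object lookups omitted; all the destinations are placeholders):

```python
"""Shared-nothing vMotion: move compute and storage in one step (pyVmomi sketch)."""
from pyVmomi import vim

def migrate_shared_nothing(vm, dest_host, dest_datastore, dest_pool):
    """vm: vim.VirtualMachine; dest_* objects looked up elsewhere (placeholders)."""
    spec = vim.vm.RelocateSpec()
    spec.host = dest_host            # new compute
    spec.pool = dest_pool            # resource pool on the destination
    spec.datastore = dest_datastore  # new (e.g. local) storage
    # Second argument is the usual vMotion priority enum.
    return vm.RelocateVM_Task(spec=spec, priority="highPriority")
```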
|
# ¿ Apr 27, 2017 05:17 |
|
H2SO4 posted:Keep in mind that (at the last time I researched this) distributed virtual switches do not like HA stuff like VRRP/HSRP. Something to do with the fact that they don't have a CAM table but use VM metadata to decide where traffic for a given MAC address goes instead. If anyone else knows differently I'd be interested to hear, since the only other sort of workaround for this behavior I'm aware of is putting everything in promiscuous mode. Enable forged transmits and MAC address changes and you should be fine. Basically, the only checking VMware does is making sure the MAC the VM is using matches what's in the VMX, to prevent a few different types of attacks. On a distributed vSwitch you can do this on a per-VM basis, so you can select each VM in the VRRP group and enable it just for them.
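Here's roughly what that looks like if you script it: a pyVmomi sketch of a per-port override flipping just MAC changes and forged transmits for the VRRP members' ports. The port keys are placeholders, and the portgroup also has to permit per-port security overrides for this to stick:

```python
"""Allow MAC changes + forged transmits on just the VRRP VMs' ports (pyVmomi sketch)."""
from pyVmomi import vim

def allow_vrrp_on_ports(dvs, port_keys):
    """dvs: the distributed switch; port_keys: dvPort keys of the VRRP members."""
    def policy(val):
        return vim.BoolPolicy(value=val, inherited=False)

    port_specs = []
    for key in port_keys:
        sec = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy()
        sec.macChanges = policy(True)         # guest may change its effective MAC
        sec.forgedTransmits = policy(True)    # frames may leave carrying the VIP MAC
        sec.allowPromiscuous = policy(False)  # promiscuous mode stays off

        setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
        setting.securityPolicy = sec

        spec = vim.dvs.DistributedVirtualPort.ConfigSpec()
        spec.key = key
        spec.operation = "edit"
        spec.setting = setting
        port_specs.append(spec)
    return dvs.ReconfigureDVPort_Task(port=port_specs)
```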
|
# ¿ Aug 22, 2017 20:03 |
|
Potato Salad posted:Lacp exists to cause you more pain and suffering and production losses than the time it takes to set up channels/aggregation up by hand, every single time What are people actually getting wrong? There's not a whole lot to set up on LACP beyond timers (for which most platforms only have one option) and whether the interfaces actively send LACP PDUs or not. I've probably seen more people get static link aggregation wrong, where maybe one side has the wrong load-distribution algorithm set. I think the dvSwitch itself supports something like 26 different options, not all of which exist on all switching platforms. That said, I almost never bother with link aggregation to hypervisors anymore: 10 gig is cheap, and source-based load distribution doesn't require upstream switch configuration.
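The no-LACP route from that last bit is just a teaming-policy change on the dvPortgroup. A minimal pyVmomi sketch (portgroup lookup omitted):

```python
"""Skip LACP: set 'route based on physical NIC load' teaming (pyVmomi sketch)."""
from pyVmomi import vim

def use_load_based_teaming(portgroup):
    """portgroup: a vim.dvs.DistributedVirtualPortgroup."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    # loadbalance_loadbased needs no LAG or upstream switch config at all;
    # loadbalance_srcid is the plain source-port default.
    teaming.policy = vim.StringPolicy(value="loadbalance_loadbased", inherited=False)

    setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    setting.uplinkTeamingPolicy = teaming

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = portgroup.config.configVersion  # required for reconfigure
    spec.defaultPortConfig = setting
    return portgroup.ReconfigureDVPortgroup_Task(spec)
```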
|
# ¿ May 8, 2020 17:51 |
|
SlowBloke posted:Nsx-V is their out of the box solution, which might be a bit overkill if vcsa was too much overhead for you. NSX-V is EOL; you should be looking at NSX-T now, which is a complete rewrite and much, much better. Given the use case, though, just install a router VM.
|
# ¿ May 14, 2021 23:35 |
|
Zorak of Michigan posted:God save us all, but OVM can actually solve licensing problems with Oracle RDBMS. It's one of the few on-prem solutions for which they allow sub-capacity licensing. I've pondered it but concluded that the extra time and energy spent dealing with it would exceed the cost of just eating full-capacity licensing. Just build a separate vSphere/whatever hypervisor cluster off to the side and license those sockets. You don't need a separate vCenter or an air gap, despite what sales says, and if you push back they'll eventually cave. Also, they only license for allocated capacity at cloud providers too, so that's an option as well.
|
# ¿ Jul 19, 2022 21:48 |