|
YOLOsubmarine posted:Upgrading systems to maintain vendor support is not scope creep.
|
# ? Nov 21, 2019 05:45 |
|
|
Is anyone here using Microsoft ASR with a physical VMware environment? Our org has already bought into ASR, and I've been tasked with automating some key pieces for DR testing; however, I'm finding some REALLY loving BIG caveats in the automation solutions Microsoft presents for VMware & ASR. If so, how are you dealing with the fact that every automation solution presented by Microsoft relies on PowerShell scripts injected by the Azure Virtual Machine Agent in Azure? Did someone at Microsoft forget that this solution isn't going to work during failback for 100% of VMware customers? Or is Microsoft's goal to trap people in the cloud by only selling them 50% of a DR strategy?
|
# ? Jan 7, 2020 22:13 |
|
Wicaeed posted:Did someone at Microsoft forget that this solution isn't going to work during failback for 100% of VMware customers? Or is Microsoft's goal to trap people in the cloud by only selling them 50% of a DR strategy? This is why I've only used it to migrate things to Azure. People who need a DR strategy for their VMware environment that can fail over to the cloud and back are a big driver of sales for VMC on AWS and VMware on Azure.
|
# ? Jan 11, 2020 18:29 |
|
Are there any books or video series you folks recommend for learning virtualization? The OP has one but I'm not sure if it's dated or not.
|
# ? Jan 12, 2020 01:25 |
|
So I recently acquired a Dell M1000e Bladecenter and a bunch of blade servers, so I'm taking the opportunity to use all these spare servers to do side-by-side comparisons of virtualization hypervisors for home lab stuff. I normally use XenServer/XCP-ng in my lab, but I wanted to try Proxmox, throw in ESXi, and maybe KVM. What other open-source hypervisors should I try? I set up Proxmox this evening. So far I like it; not quite as nice and intuitive as XenServer's XenCenter, but it's still very clean for a web UI.
|
# ? Jan 12, 2020 04:49 |
|
Empress Brosephine posted:Are there any books or video series you folks recommend for learning virtualization? The OP has one but I'm not sure if it's dated or not. I've always considered Lowe's "Mastering vSphere" series of books to be a good starting point. The latest one was written by another author but it is still decent.
|
# ? Jan 12, 2020 11:44 |
|
I'm trying to get iSCSI multipathing working the way I would like on my Linux host (Proxmox), one host to one storage device. In Windows you can configure Round Robin w/ Subset to create a primary group and a standby group should the primary fail entirely; how would one create a similar setup under Linux? I've been able to create a primary group w/ 4 paths and that works fine, but I can't figure out how to add a "don't use this unless everything else has failed" path.
|
# ? Jan 13, 2020 11:37 |
|
Actuarial Fables posted:I'm trying to get iscsi multipath working the way I would like to on my Linux host (proxmox), one host to one storage device. In Windows you can configure Round Robin w/ Subset to create a primary group and a standby group should the primary fail entirely - how would one create a similar setup under Linux? I've been able to create a primary group w/ 4 paths and that works fine, but I can't figure out how one would add in a "don't use this unless everything else has failed" path. I'm not familiar with proxmox but is there a specific reason why you need iSCSI Multipath and can't just rely on link aggregation?
|
# ? Jan 13, 2020 16:26 |
|
If your storage appliance has two discrete controller heads in active/active then MPIO is your best bet. It allows you to point at the two different targets and then iSCSI handles reconverging those to the same LUNs. It's generally only something you'd need to do on high-end storage arrays that use dual SAS interfaces for true full-path redundancy. You could probably pull off the same thing without the extra LUN layer with NFS v4.1 multipathing support.
|
# ? Jan 13, 2020 16:37 |
|
Pile Of Garbage posted:I'm not familiar with proxmox but is there a specific reason why you need iSCSI Multipath and can't just rely on link aggregation? I'm mostly just trying to do stupid things in my lab so that I can understand things better. BangersInMyKnickers posted:Could probably pull off the same without the need for the extra lun layer with NFS4 MPIO support. That was going to be my next project once I finally wrap my head around iscsi multipath configuration.
|
# ? Jan 13, 2020 18:02 |
|
Actuarial Fables posted:I'm mostly just trying to do stupid things in my lab so that I can understand things better. Nice, I can get on-board with that (And explains why I've got such expensive poo poo in my home network). What SAN are you using and does it present multiple target IPs?
|
# ? Jan 13, 2020 18:05 |
|
Pile Of Garbage posted:Nice, I can get on-board with that (And explains why I've got such expensive poo poo in my home network). What SAN are you using and does it present multiple target IPs? (It does present multiple target IPs.) My goal is to have the four paths connected through the lab switch be the active group, load balancing between themselves, and to also include the admin path as a failover path that is only used if all the lab paths go down (like if I unplugged the lab switch or something). I've been able to get the four lab paths working as a multipath group (or I did until I broke it yesterday), so now I'm trying to figure out how to get the failover path configured. Figure if Windows has the kind of config I'd like (Round Robin w/ Subset) then it should be possible to make something similar under Linux. e: From what I'm able to gather, I need to set the grouping policy to be based on priority, then set a lower priority on the admin path. This should create two separate path groups. Not sure if I can have a group of just one path though, but I suppose I'll find out. Actuarial Fables fucked around with this message at 02:40 on Jan 14, 2020 |
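For reference, a minimal sketch of what that priority-grouping setup might look like in /etc/multipath.conf on the Proxmox host. The WWID and device names here are placeholders, not from the post, and the weightedpath prioritizer is just one way to pin per-path priorities:

```conf
multipaths {
    multipath {
        wwid  "36001405XXXXXXXXXXXXXXXX"   # placeholder WWID for the LUN
        # Group paths by priority instead of spreading I/O over all of them
        path_grouping_policy group_by_prio
        # Return to the lab group as soon as it recovers
        failback immediate
        # weightedpath assigns static priorities per device: the four lab
        # paths (sdb-sde here, hypothetical names) share a high priority and
        # form the active round-robin group; the admin path (sdf) gets a low
        # priority, so multipathd only uses it when every lab path is down.
        prio "weightedpath"
        prio_args "devname sd[b-e] 10 sdf 1"
    }
}
```

After editing, `systemctl reload multipathd` and `multipath -ll` should show two path groups, with only the high-priority one active.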
# ? Jan 13, 2020 19:44 |
|
I was gifted a GRID K2 at my new job, and I'm not sure what I want to do with it. Maybe set up a VDI for remote access? Anyone have any suggestions for fun stuff to do with it?
|
# ? Jan 16, 2020 20:05 |
|
I'd probably dump it on a Hyper-V host and do some testing of vGPU scaling
|
# ? Jan 16, 2020 22:24 |
|
NewFatMike posted:I was gifted a GRID K2 at my new job, and I'm not sure what I want to do with it. Maybe set up a VDI for remote access? Anyone have any suggestions for fun stuff to do with it? GRID K2s are the last NVIDIA cards that don't require licensing for vGPU. Those cards are nice for homelabs with ESXi 6.5 (the last supported version for K2s), as in that case you just insert the card, install a VIB, and you get hardware 3D acceleration on that host. SlowBloke fucked around with this message at 08:42 on Jan 17, 2020 |
# ? Jan 17, 2020 08:34 |
|
Thanks friends!
|
# ? Jan 24, 2020 23:57 |
|
So, I really want to post this solution in case someone using Xen also runs into it: I had a CIFS SR for my VHDs, and I added a member to the pool. What I forgot was that the SR was using a cached password for the share, and that password had changed. So, when I tried to attach the SR to the new pool member, it kicked BOTH pool members off and threw a generic SMB error in the GUI. So, I checked dmesg: code:
However, XCP and Citrix are not clear on whether you can update the secret used by the SR; their recommendation is just to destroy and recreate the SR. I didn't want to do that, so I dug in deeper. Run xe pbd-list: code:
xe secret-list code:
Do a tab lookup of xe secret-* and you come up with code:
code:
code:
Again, it's not that this was well hidden or anything, but XCP and Citrix do not provide documentation that I could easily find for repairing an SR with a changed password. Their recommendation was generally to delete and recreate, which is a pain. CommieGIR fucked around with this message at 17:43 on Jan 30, 2020 |
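To make the flow above concrete, here is a hypothetical sketch of the repair sequence (the UUIDs and password are placeholders; the original command output was lost from this post):

```shell
# Find the PBD(s) backing the SR and note the secret UUID
# referenced in device-config
xe pbd-list sr-uuid=<sr-uuid> params=uuid,device-config

# List stored secrets to confirm which UUID holds the cached password
xe secret-list

# Update the cached password in place
xe secret-param-set uuid=<secret-uuid> value='NewSharePassword'

# Unplug and re-plug the PBD on each pool member so the SR
# reattaches with the updated credentials
xe pbd-unplug uuid=<pbd-uuid>
xe pbd-plug uuid=<pbd-uuid>
```

This avoids destroying the SR and re-creating it, which is the officially suggested workaround.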
# ? Jan 30, 2020 17:15 |
|
And I thought working with block storage on Xen was a pain in the rear end and scary (mostly scary when things go wrong). Looks like even file storage repos are a pain in the rear end as well.
|
# ? Feb 3, 2020 22:30 |
|
https://twitter.com/CommieGIR/status/1224807652519305219?s=20
|
# ? Feb 4, 2020 23:46 |
|
Eh, I get their position on this. Core counts were increasing at a pretty even pace until Zen. This isn't nearly as bad as when they tried to base licensing on allocated vRAM, which would have completely killed the over-provisioning savings from virtualizing in the first place. Twice the cores over this threshold, pay twice the socket licensing. I'd be really pissed if I was running 48- and 56-core Xeons though.
|
# ? Feb 5, 2020 15:39 |
|
Since we're back to core licensing, does that mean 4-socket boxes are going to become popular again?
|
# ? Feb 5, 2020 17:14 |
|
That makes the 48-core Epyc pretty pointless; better to get two 32c or one 64c.
|
# ? Feb 5, 2020 17:32 |
|
It would be nicer if it were just cores rather than sockets + cores, so a dual 48-core could be covered by three core-pack licenses per host rather than having to buy four of them.
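The arithmetic behind that complaint can be sketched out. This assumes the announced 32-cores-per-license threshold; the "host pooling" variant is the hypothetical rule the post above is wishing for:

```shell
#!/bin/sh
# Licensing math for a dual-socket, 48-core-per-socket host.
sockets=2
cores_per_socket=48

# Announced rule: each populated socket needs ceil(cores/32) licenses.
per_socket=$(( (cores_per_socket + 31) / 32 ))
current=$(( sockets * per_socket ))

# Wished-for rule: pool all cores in the host, then divide by 32.
pooled=$(( (sockets * cores_per_socket + 31) / 32 ))

echo "per-socket rule: $current licenses"   # dual 48-core -> 4
echo "host pooling:    $pooled licenses"    # 96 cores     -> 3
```

Either way a 2x 32-core or single 64-core box lands on exactly two licenses, which is what makes the 48-core SKUs awkward.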
|
# ? Feb 5, 2020 17:57 |
|
I made the move to Epyc a few years ago specifically for the licensing savings. At least I don't have to deal with nearly as much speculative-execution horseshit on that infra.
|
# ? Feb 5, 2020 18:04 |
|
Perplx posted:That makes the 48 core epyc pretty pointless, better to get 2 32c or one 64c. For memory performance, does it really matter whether you have one or two sockets populated on an Infinity Fabric system?
|
# ? Feb 5, 2020 18:05 |
|
BangersInMyKnickers posted:Eh, I get their position on this. Core count increases were moving pretty at a pretty even state until zen. This isn't nearly as bad as when they tried to make licensing based on allocated vram which would have completely killed the over-provisioning savings from virtualizing in the first place. Twice the cores over this threshold, pay twice the socket licensing. I'd be really pissed if I was running 48 and 56 core Xeons though. I mean... considering VMware is a Dell majority-owned company, and Dell is backing Xeon over Epyc (and let's be honest, Epyc is doing density better than Xeon), I get why VMware is doing it, but it seems like an attack on AMD rather than just an adjustment for shrinking socket counts due to rising core counts.
|
# ? Feb 5, 2020 18:13 |
|
Potato Salad posted:For memory performance, does it really matter whether you have one or two sockets populated on an infinityfabric system? If your application is NUMA-aware and can realistically scale to 64+ threads, then that second socket is going to double your memory bandwidth while keeping latency relatively low, which will only matter if you are limited by memory bandwidth. Bandwidth on the socket interconnect is limited and latency is much higher than addressing everything on the local memory controllers, so if the application can't scale/NUMA well and gets its threads split across both sockets, it will either run the same or worse.
|
# ? Feb 5, 2020 18:24 |
|
So Dell has dual SD cards for hosting the hypervisor in places where you are doing remote logging. Except in my M915's case, the redundant SD card didn't work. Oops. Oh well, back to hosting the hypervisor on a SAS RAID 1. I fully suspected it wouldn't work well, but since I had HA servers, I figured I'd give it a shot.
|
# ? Feb 7, 2020 01:50 |
|
CommieGIR posted:So Dell has dual SD Cards for hosting the hypervisor in places where you are doing remote logging. Just change the syslog location to a datastore or a syslog server.
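For anyone looking for the specifics, a sketch of the relevant esxcli commands; the datastore path and syslog host here are placeholders:

```shell
# Point the persistent log directory at a datastore instead of the SD cards
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/esxi-logs

# ...or forward logs to a remote syslog collector
esxcli system syslog config set --loghost='udp://syslog.example.com:514'

# Apply the new configuration
esxcli system syslog reload
```

The same settings are also exposed in the host client under Advanced Settings (Syslog.global.logDir / Syslog.global.logHost).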
|
# ? Feb 7, 2020 02:26 |
|
Moey posted:Just change the syslog location to a datastore or a syslog server. It's more that the redundant SD cards didn't fail over, or they both failed at the same time.
|
# ? Feb 7, 2020 02:38 |
|
CommieGIR posted:Its more the issue of the redundant SD cards didn't fail over, or they failed both at the same time. Ha, writing logs to 'em may do that. I've never had both die at once tho. ESXi will keep chuggin' along running in memory without its boot drive, it just won't reboot or mount the VMware Tools ISO on guests.
|
# ? Feb 7, 2020 02:48 |
|
Moey posted:Ha, writing logs to em may do that. I've never had both die at once tho. Not really sure, because they were logging to an ELK stack, so there shouldn't have been excess writes. Oh well, it's recovered on the RAID 1 and back in operation.
|
# ? Feb 7, 2020 03:37 |
|
Just boot off the SAN
|
# ? Feb 7, 2020 13:08 |
|
Oops, that reminds me -- I've been running ESXi off redundant SD's on my 620 in my homelab for like .. two years now, and haven't sent logging off-machine yet. I feel like I'm just playing with fire at this point. I mean granted it doesn't see a lot of use, but I expect that it still logs a non-insignificant amount of random garbage that I never really look at. I should just feed logs to the void.
|
# ? Feb 7, 2020 13:30 |
|
Martytoof posted:Oops, that reminds me -- I've been running ESXi off redundant SD's on my 620 in my homelab for like .. two years now, and haven't sent logging off-machine yet. Depending on the SD card size it might keep a minimal part of the logs and discard the rest. Same with small USB sticks.
|
# ? Feb 7, 2020 16:46 |
|
Martytoof posted:I feel like I'm just playing with fire at this point. Our central office ESXi server was configured to write logs to its SD cards when it went belly-up on a Friday, and it only had NBD (next business day) support. That was a touchy weekend.
|
# ? Feb 7, 2020 17:23 |
|
CommieGIR posted:So Dell has dual SD Cards for hosting the hypervisor in places where you are doing remote logging. Our VDI guys have also used the dual SD setup on HPE boxes and it was also problematic, but I don't remember the details.
|
# ? Feb 7, 2020 18:18 |
|
I've really never had issues with dual SD. Before that I would shove a thumb drive into our hosts; those would cook themselves in 2-3 years normally. May go that route and test booting via SAN. Unsure if that will end up too complex to try and explain to any of my coworkers tho.
|
# ? Feb 13, 2020 05:06 |
|
For our next round of host purchases, I'm pushing for going with dual SATA SSDs. It's a trivial price delta compared to the cost of the host, and ought to be much more stable.
|
# ? Feb 13, 2020 05:25 |
|
|
We have started using long-endurance SD/microSD cards from SanDisk to cover embedded hypervisor cases; conventional SD cards would get fried every two to three years.
|
# ? Feb 13, 2020 09:52 |