Potato Salad
Oct 23, 2014

nobody cares


YOLOsubmarine posted:

Upgrading systems to maintain vendor support is not scope creep.

:golfclap:


Wicaeed
Feb 8, 2005
Is anyone here using Microsoft ASR with a physical VMware environment?

Our org has already bought into ASR, and I've been tasked with automating some key pieces for DR testing, but I'm finding some REALLY loving BIG caveats in the automation solutions Microsoft presents for VMware & ASR.

If so, how are you dealing with the fact that every automation solution Microsoft presents relies on PowerShell scripts injected by the Azure Virtual Machine Agent in Azure?

Did someone at Microsoft forget that this solution isn't going to work during failback for 100% of VMware customers? Or is Microsoft's goal to trap people in the cloud by only selling them 50% of a DR strategy?

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

Wicaeed posted:

Did someone at Microsoft forget that this solution isn't going to work during failback for 100% of VMware customers? Or is Microsoft's goal to trap people in the cloud by only selling them 50% of a DR strategy?

This is why I've only used it to migrate things to Azure. People who need a DR strategy for their VMware environment that can fail over to and back from the cloud are a big driver of sales for VMC on AWS or VMware on Azure.

Empress Brosephine
Mar 31, 2012

by Jeffrey of YOSPOS
Are there any books or video series you folks recommend for learning virtualization? The OP has one, but I'm not sure whether it's dated or not.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
So I recently acquired a Dell M1000e Bladecenter and a bunch of blade servers, and I'm taking the opportunity to use all these spare servers to do side-by-side comparisons of virtualization hypervisors for home lab stuff.

I normally use XenServer/XCP-ng in my lab, but I wanted to try Proxmox, throw in ESXi, and maybe KVM. What other open-source hypervisors should I try?

I set up Proxmox this evening. So far I like it; it's not quite as nice and intuitive as XenServer's management center, but it's still very clean for a web UI.

SlowBloke
Aug 14, 2017

Empress Brosephine posted:

Are there any books or video series you folks recommend for learning virtualization? The OP has one, but I'm not sure whether it's dated or not.

I've always considered Lowe's "Mastering vSphere" series of books to be a good starting point. The latest one was written by another author, but it's still decent.

Actuarial Fables
Jul 29, 2014

Taco Defender
I'm trying to get iscsi multipath working the way I would like to on my Linux host (proxmox), one host to one storage device. In Windows you can configure Round Robin w/ Subset to create a primary group and a standby group should the primary fail entirely - how would one create a similar setup under Linux? I've been able to create a primary group w/ 4 paths and that works fine, but I can't figure out how one would add in a "don't use this unless everything else has failed" path.

Pile Of Garbage
May 28, 2007



Actuarial Fables posted:

I'm trying to get iscsi multipath working the way I would like to on my Linux host (proxmox), one host to one storage device. In Windows you can configure Round Robin w/ Subset to create a primary group and a standby group should the primary fail entirely - how would one create a similar setup under Linux? I've been able to create a primary group w/ 4 paths and that works fine, but I can't figure out how one would add in a "don't use this unless everything else has failed" path.

I'm not familiar with proxmox but is there a specific reason why you need iSCSI Multipath and can't just rely on link aggregation?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

If your storage appliance has two discrete controller heads in active/active, then MPIO is your best bet. It lets you point at the two different targets, and MPIO handles reconverging those paths to the same LUNs. It's generally only something you'd need on high-end storage arrays that use dual SAS interfaces for true full-path redundancy. Could probably pull off the same without the need for the extra LUN layer with NFSv4 multipathing support.

Actuarial Fables
Jul 29, 2014

Taco Defender

Pile Of Garbage posted:

I'm not familiar with proxmox but is there a specific reason why you need iSCSI Multipath and can't just rely on link aggregation?

I'm mostly just trying to do stupid things in my lab so that I can understand things better.

BangersInMyKnickers posted:

Could probably pull off the same without the need for the extra LUN layer with NFSv4 multipathing support.

That was going to be my next project once I finally wrap my head around iscsi multipath configuration.

Pile Of Garbage
May 28, 2007



Actuarial Fables posted:

I'm mostly just trying to do stupid things in my lab so that I can understand things better.

Nice, I can get on-board with that (And explains why I've got such expensive poo poo in my home network). What SAN are you using and does it present multiple target IPs?

Actuarial Fables
Jul 29, 2014

Taco Defender

Pile Of Garbage posted:

Nice, I can get on-board with that (And explains why I've got such expensive poo poo in my home network). What SAN are you using and does it present multiple target IPs?



(it does present multiple target IPs)

My goal is to have the four paths connected through the Lab switch act as the active group, load balancing among themselves, and also include the Admin path as a failover path that is only used if all the Lab paths go down (like if I unplugged the lab switch or something). I've been able to get the four lab paths working as a multipath group (or I did until I broke it yesterday), so now I'm trying to figure out how to get the failover path configured. I figure if Windows has the kind of config I'd like (Round Robin w/ Subset) then it should be possible to make something similar under Linux.

e: From what I'm able to gather, I need to set the grouping policy to be priority-based and then set a lower priority on the Admin path. This should create two separate path groups. Not sure if I can have a group of just one path, though, but I suppose I'll find out.
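For reference once I get back to it, this is roughly what I'm picturing in /etc/multipath.conf - the wwid and device names are made up, and I'm not 100% sure the weightedpath prioritizer is the right tool here, so treat it as a sketch:
code:
# /etc/multipath.conf -- hypothetical wwid and device names
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid                  "36001405aabbccdd0000000000000001"  # get the real one from `multipath -ll`
        path_grouping_policy  group_by_prio                       # group paths by priority
        prio                  weightedpath
        prio_args             "devname sd[b-e] 50 sdf 10"         # lab paths high priority, admin path low
        path_selector         "round-robin 0"                     # round-robin within the active group
        failback              immediate                           # fall back to the lab group when it recovers
        no_path_retry         queue
    }
}
Then `multipath -r` to reload, and `multipath -ll` should show two path groups: the four lab paths active and the admin path on standby.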

Actuarial Fables fucked around with this message at 02:40 on Jan 14, 2020

NewFatMike
Jun 11, 2015

I was gifted a GRID K2 at my new job, and I'm not sure what I want to do with it. Maybe set up a VDI for remote access? Anyone have any suggestions for fun stuff to do with it?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I'd probably dump it on a Hyper-V host and do some testing of vGPU scaling

SlowBloke
Aug 14, 2017

NewFatMike posted:

I was gifted a GRID K2 at my new job, and I'm not sure what I want to do with it. Maybe set up a VDI for remote access? Anyone have any suggestions for fun stuff to do with it?

The GRID K2 is the last NVIDIA card that doesn't require licensing for vGPU. Those cards are nice for homelabs with ESXi 6.5 (the last supported version for K2s): you just insert the card, install a VIB, and you get hardware 3D acceleration on that host.
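From memory the install is just a couple of esxcli calls - the VIB filename below is a placeholder, grab whatever GRID host driver bundle matches your ESXi build:
code:
# put the host in maintenance mode and install the vGPU manager VIB (filename is a placeholder)
esxcli system maintenanceMode set --enable true
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-vGPU-VMware_ESXi_6.5_Host_Driver.vib
reboot
# after the reboot, exit maintenance mode and confirm the driver loaded
esxcli system maintenanceMode set --enable false
esxcli software vib list | grep -i nvidia
nvidia-smi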

SlowBloke fucked around with this message at 08:42 on Jan 17, 2020

NewFatMike
Jun 11, 2015

Thanks friends!

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
So, I really want to post this solution in case someone using Xen also runs into it:

I had a CIFS SR for my VHDs, and I added a member to the pool. What I forgot was that I was running on a cached password for the share, and that password had changed. So, when I tried to attach the SR to the new pool member, it kicked BOTH pool members off and threw a generic SMB error in the GUI.

So, I checked the dmesg:
code:
[63785.230730] CIFS VFS: cifs_mount failed w/return code = -13
[63897.747524] Status code returned 0xc000006d STATUS_LOGON_FAILURE
[63897.747546] CIFS VFS: Send error in SessSetup = -13
[63897.747573] CIFS VFS: cifs_mount failed w/return code = -13
[63898.774159] Status code returned 0xc000006d STATUS_LOGON_FAILURE
Oops.

However, XCP and Citrix are not clear on whether you can update the secret used by the SR; their recommendation is just to destroy and reconnect the SR. I didn't want to do that, so I dug in deeper.

Run xe pbd-list
code:
uuid ( RO)                  : 36011f83-18b5-24fb-9646-9ccd006fe87f
             host-uuid ( RO): 097c27c4-ef10-427d-9440-5a20e76781ff
               sr-uuid ( RO): eba24fc3-572d-487c-5605-3d57ca259ffb
         device-config (MRO): server: \\192.168.1.248\VHD; username: xen; password_secret: 3fb287bc-4304-12f7-e8fe-0f134050f419
    currently-attached ( RO): false
Ah, there's a secret you can read with xe secret-list!....in plain text.

xe secret-list
code:

uuid ( RO)     : 3a4ff613-4ee2-7302-6757-56f5cc3d4d17
    value ( RW): *THISISAPASSWORD*
Huh, that value is RW, so in theory I can update it. But NOBODY at Citrix or XCP states clearly how.

Do a tab lookup of xe secret-* and you come up with
code:
 xe secret-
secret-create        secret-destroy       secret-list          secret-param-clear   secret-param-get     secret-param-list    secret-param-set
secret-param-set should be the ticket!
code:
XE(1)
=======
:doctype: manpage
:man source:   xe secret-param-set
:man version:  {1}
:man manual:   xe secret-param-set manual

NAME
-----
xe-secret-param-set - Set parameters for a secret

SYNOPSIS
--------
*xe secret-param-set*  uuid=<SECRET UUID> [ <PARAMETER>=<VALUE> ] [ <MAP PARAMETER>:<MAP PARAMETER KEY>=<VALUE> ]

DESCRIPTION
-----------
*xe secret-param-set* sets writable parameters. Use *xe secret-list* and *xe secret-param-list to identify writable parameters (RW, MRW). To append a value to a writable set or map (SRW, MRW) parameter use *xe secret-param-add*
Got it. And the value is RW, so we can write to it!
code:
xe secret-param-set uuid=3fb287bc-4304-12f7-e8fe-0f134050f419  value='THISISANEWPASSWORD'
Great! Here comes the kicker: since this is a pool, there are two or more CIFS attachments (each host gets its own PBD and secret), so you will have to update the key twice because the other host is going to try to use its cached one.

Again, it's not that this was well hidden or anything, but XCP and Citrix do not provide documentation that I could easily find on repairing an SR with a changed password. Their recommendation was generally to delete and recreate, which is a pain.
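For anyone who hits this later, the whole fix boils down to roughly this (UUIDs are the placeholders from my output above, and I haven't re-run it end to end, so check against your own pbd-list output):
code:
# find the PBDs for the SR and the password_secret each one references (one PBD per host)
xe pbd-list sr-uuid=eba24fc3-572d-487c-5605-3d57ca259ffb
# update the stored password for each password_secret UUID that shows up
xe secret-param-set uuid=3fb287bc-4304-12f7-e8fe-0f134050f419 value='THISISANEWPASSWORD'
# re-attach the storage on each host
xe pbd-plug uuid=36011f83-18b5-24fb-9646-9ccd006fe87f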

CommieGIR fucked around with this message at 17:43 on Jan 30, 2020

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful
And I thought working with block storage on Xen was a pain in the rear end and scary (mostly scary when things go wrong). Looks like even file storage repos are a pain in the rear end as well.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
https://twitter.com/CommieGIR/status/1224807652519305219?s=20

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Eh, I get their position on this. Core counts were increasing at a pretty even pace until Zen. This isn't nearly as bad as when they tried to base licensing on allocated vRAM, which would have completely killed the over-provisioning savings from virtualizing in the first place. Twice the cores over this threshold, pay twice the socket licensing. I'd be really pissed if I were running 48- and 56-core Xeons, though.

Thanks Ants
May 21, 2004

#essereFerrari


Since we're back to core licensing, does that mean 4-socket boxes are going to become popular again?

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
That makes the 48-core Epyc pretty pointless; better to get two 32c or one 64c.

Thanks Ants
May 21, 2004

#essereFerrari


It would be nicer if it were just cores rather than sockets + cores, so a dual 48-core could be covered by three core-pack licenses per host rather than having to buy four of them.
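Just to spell the math out (assuming the 32-cores-per-license cutoff, quick shell arithmetic):
code:
# one license per socket, plus another per socket for every additional 32 cores
cores_per_socket=48; sockets=2; cap=32
per_socket_counting=$(( sockets * ( (cores_per_socket + cap - 1) / cap ) ))   # 2 x ceil(48/32) = 4
pure_core_counting=$(( ( (cores_per_socket * sockets) + cap - 1 ) / cap ))    # ceil(96/32) = 3
echo "licenses under the socket rule: $per_socket_counting, if it were cores only: $pure_core_counting"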

Potato Salad
Oct 23, 2014

nobody cares


I made a move to epyc a few years ago specifically for licensing savings

At least i don't have to deal with nearly as much speculative execution horseshit on that infra

Potato Salad
Oct 23, 2014

nobody cares


Perplx posted:

That makes the 48-core Epyc pretty pointless; better to get two 32c or one 64c.

For memory performance, does it really matter whether you have one or two sockets populated on an infinityfabric system?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BangersInMyKnickers posted:

Eh, I get their position on this. Core counts were increasing at a pretty even pace until Zen. This isn't nearly as bad as when they tried to base licensing on allocated vRAM, which would have completely killed the over-provisioning savings from virtualizing in the first place. Twice the cores over this threshold, pay twice the socket licensing. I'd be really pissed if I were running 48- and 56-core Xeons, though.

I mean..... considering VMware is a majority Dell-owned company, and Dell is backing Xeon over Epyc (and let's be honest, Epyc is doing it better than Xeon as far as density goes), I get why VMware is doing it, but it seems like an attack on AMD rather than just an adjustment for shrinking socket counts due to rising core counts.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Potato Salad posted:

For memory performance, does it really matter whether you have one or two sockets populated on an infinityfabric system?

If your application is NUMA-aware and can realistically scale to 64+ threads, then that second socket is going to double your memory bandwidth while keeping latency relatively low, which will only matter if you are limited by memory bandwidth. Bandwidth on the socket interconnect is limited and latency is much higher than addressing everything on the local memory controllers, so if the application can't scale or behave well with NUMA and gets its threads split across both sockets, it will either run the same or worse.
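If you want to see the effect on a box, something like this works (./membench is just a stand-in for whatever bandwidth-heavy workload you have):
code:
# show the NUMA topology and relative node distances
numactl --hardware
# keep the job on one socket's cores and local memory
numactl --cpunodebind=0 --membind=0 ./membench
# compare against spreading allocations across both sockets
numactl --interleave=all ./membench
# check local vs remote allocation counters afterwards
numastat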

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
So Dell has dual SD Cards for hosting the hypervisor in places where you are doing remote logging.

Except in my M915's case, the redundant SD card didn't work. Oops. Oh well, back to hosting the hypervisor on a RAID1 SAS. I fully suspected it wouldn't work well, but since I had HA servers, I figured I'd give it a shot.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

CommieGIR posted:

So Dell has dual SD Cards for hosting the hypervisor in places where you are doing remote logging.

Except in my M915's case, the redundant SD card didn't work. Oops. Oh well, back to hosting the hypervisor on a RAID1 SAS. I fully suspected it wouldn't work well, but since I had HA servers, I figured I'd give it a shot.

Just change the syslog location to a datastore or a syslog server.
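On the host it's just a couple of esxcli calls - the datastore path and syslog hostname here are placeholders:
code:
# send logs to a persistent datastore and/or a remote syslog server
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/esxi-logs
esxcli system syslog config set --loghost='tcp://syslog.example.local:514'
esxcli system syslog reload
# open the outbound firewall rule if you went the remote route
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
esxcli network firewall refresh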

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Moey posted:

Just change the syslog location to a datastore or a syslog server.

It's more the issue that the redundant SD cards didn't fail over, or that they both failed at the same time.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

CommieGIR posted:

It's more the issue that the redundant SD cards didn't fail over, or that they both failed at the same time.

Ha, writing logs to em may do that. I've never had both die at once tho.

ESXi will keep chuggin along running in memory without its boot drive, just won't reboot or mount the VMware Tools ISO on guests.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Moey posted:

Ha, writing logs to em may do that. I've never had both die at once tho.

ESXi will keep chuggin along running in memory without its boot drive, just won't reboot or mount the VMware Tools ISO on guests.

Not really sure, because they were logging to an ELK stack, so there shouldn't have been excess writes. Oh well, it's recovered on the RAID1 and back in operation.

Thanks Ants
May 21, 2004

#essereFerrari


Just boot off the SAN :catdrugs:

some kinda jackal
Feb 25, 2003

 
 
Oops, that reminds me -- I've been running ESXi off redundant SD's on my 620 in my homelab for like .. two years now, and haven't sent logging off-machine yet. I feel like I'm just playing with fire at this point.

I mean granted it doesn't see a lot of use, but I expect that it still logs a non-insignificant amount of random garbage that I never really look at. I should just feed logs to the void.

SlowBloke
Aug 14, 2017

Martytoof posted:

Oops, that reminds me -- I've been running ESXi off redundant SD's on my 620 in my homelab for like .. two years now, and haven't sent logging off-machine yet. I feel like I'm just playing with fire at this point.

I mean granted it doesn't see a lot of use, but I expect that it still logs a non-insignificant amount of random garbage that I never really look at. I should just feed logs to the void.

Depending on the SD size it might keep a minimal part of the logs and discard the rest. Same with small USB sticks.

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

Martytoof posted:

I feel like I'm just playing with fire at this point.

Our central office ESXi server was configured to write logs to its SD cards when it went belly up on a Friday, and it only had NBD support. That was a touchy weekend.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

CommieGIR posted:

So Dell has dual SD Cards for hosting the hypervisor in places where you are doing remote logging.

Except in my M915's case, the redundant SD card didn't work. Oops. Oh well, back to hosting the hypervisor on a RAID1 SAS. I fully suspected it wouldn't work well, but since I had HA servers, I figured I'd give it a shot.

Our VDI guys have also used the Dual SD setup on HPEs and it was also problematic, but I don't remember the details.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I've really never had issues with dual SD.

Before that I would shove a thumb drive into our hosts; those would cook themselves in 2-3 years normally.

May go the :catdrugs: route and test booting via SAN. Unsure if that will end up too complex to try and explain to any of my coworkers tho.

Zorak of Michigan
Jun 10, 2006

For our next round of host purchases, I'm pushing for going with dual SATA SSDs. It's a trivial price delta compared to the cost of the host, and ought to be much more stable.


SlowBloke
Aug 14, 2017
We have started using long-endurance SD/microSD cards from SanDisk to cover embedded hypervisor cases; conventional SD cards would get fried every two to three years.
