Zorak of Michigan
Jun 10, 2006

Wow, that's depressing. Ease and simplicity of upgrades are a huge part of their sales pitch.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

GrandMaster posted:

Yeah, it's a very good point. Obviously the failover stuff was all tested during commissioning, and we have multiple non-prod environments for all the critical apps, but it's all on the same hardware. The higher-ups wanted ONE BIG CLUSTER and completely disregarded advice to split non-prod environments onto physically separate hardware like we used to have pre-HCI.

On another note, has anyone had a positive experience with the VxRail VCF upgrades? It was sold to us as "one click will upgrade the entire environment" but in reality it's "one support case will be required for every single component, which inevitably fails during the upgrade process". From memory it's been about 15 support cases and 3 months for our management cluster upgrade, and it's not even finished yet.

God I hate VxRail haha

I've heard nothing good, sadly. Hopefully with vSphere Lifecycle Manager (in 7.0) it'll become better/easier.

devmd01
Mar 7, 2006

Elektronik
Supersonik
I know hyperconverged certainly has its use cases and can theoretically simplify troubleshooting, since you have a verified stack and a single vendor, but I'm just not convinced by it yet, especially with us moving as much as we can to SaaS or IaaS on Azure for tier 0/1 apps.

We’re on UCS/Pure for our VMware clusters and couldn’t be happier, it’s a pretty drat solid combination.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

devmd01 posted:

I know hyperconverged certainly has its use cases and can theoretically simplify troubleshooting, since you have a verified stack and a single vendor, but I'm just not convinced by it yet, especially with us moving as much as we can to SaaS or IaaS on Azure for tier 0/1 apps.

We’re on UCS/Pure for our VMware clusters and couldn’t be happier, it’s a pretty drat solid combination.

My previous work moved to vSAN (before I started there) because "traditional SANs are too complicated and we don't want to have a 'Storage Admin'". They were used to the old-school SANs and didn't realize that with a lot of the newer generation stuff (like Pure) you no longer need to know what the backend storage is doing from a RAID perspective, or tune it for resilience and performance; you basically present a LUN and off you go.

Zorak of Michigan
Jun 10, 2006

My only problem with Pure is that the moment we went to year-to-year renewal instead of the three-year they wanted us to sign, my assigned customer engineer suddenly got real hard to get hold of. They continue to just work and do everything we want, and when we open a case for proactive support they're right there to help, so I'm not reaaaaal upset, just miffed.

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost
So I spent 8 hours yesterday struggling with the oVirt installation, only to realize that I fundamentally disagree with the platform's external DNS requirement. The documentation specifically states that it doesn't support DNS server virtual machines and that installation requires a standalone DNS server. Anyway, now I'm trying to decide on which hypervisor/management interface to use on my home network. My main goals are to run pfSense and Plex, but I also support a VMware environment at work and would like to find a low-cost alternative. Right now I'm trying to decide between Hyper-V, XCP-NG, and KVM/Proxmox. I guess this question probably gets asked often, but does anybody have any suggestions?

Pikehead
Dec 3, 2006

Looking for WMDs, PM if you have A+ grade stuff
Fun Shoe

devmd01 posted:

I know hyperconverged certainly has its use cases and can theoretically simplify troubleshooting, since you have a verified stack and a single vendor, but I'm just not convinced by it yet, especially with us moving as much as we can to SaaS or IaaS on Azure for tier 0/1 apps.

We’re on UCS/Pure for our VMware clusters and couldn’t be happier, it’s a pretty drat solid combination.

To me, hyperconverged is for a couple of situations (on-prem):
* where a minimal footprint is required but redundancy at the host/storage level is still needed
* where the business doesn't want to deal with separate storage

vSAN seems alright to me, Nutanix is fine (but not VMware on Nutanix).
I prefer traditional compute + SAN for on-prem, but that's what I learned on.

Potato Salad
Oct 23, 2014

nobody cares


how do you get onprem san with meaningful redundancy for less than a quarter of a million dollars though

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

I have 35 TB of active/active storage and I think we paid $100k. Some of it is rotational though. I guess the secret is to not need high performance?

Potato Salad
Oct 23, 2014

nobody cares


Happiness Commando posted:

I have 35 TB of active/active storage and I think we paid $100k. Some of it is rotational though. I guess the secret is to not need high performance?

One of my clients has five vSAN nodes for $150,000 with similar capacity, all on SSDs. That includes compute and licensing, not just storage.

I kind of don't get it?

Hughmoris
Apr 21, 2007
Let's go to the abyss!
I use VirtualBox for an Ubuntu VM to dabble in programming and media. I've been having host RAM issues lately (I think it's RAM), and when I just tried to launch my Ubuntu VM, the .vdi was inaccessible. After some googling and research, I've found that my .vdi file is 0 KB. I'm guessing that means it's corrupted.

Is there any magic I can use to try repairing the .vdi file, or bring it to life long enough to get some media out of the VM?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Potato Salad posted:

how do you get onprem san with meaningful redundancy for less than a quarter of a million dollars though

Dude, storage is relatively cheap, and most SAN gear is 25Gb+ fiber minimum now.

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost

RVWinkle posted:

So I spent 8 hours yesterday struggling with the oVirt installation, only to realize that I fundamentally disagree with the platform's external DNS requirement. The documentation specifically states that it doesn't support DNS server virtual machines and that installation requires a standalone DNS server. Anyway, now I'm trying to decide on which hypervisor/management interface to use on my home network. My main goals are to run pfSense and Plex, but I also support a VMware environment at work and would like to find a low-cost alternative. Right now I'm trying to decide between Hyper-V, XCP-NG, and KVM/Proxmox. I guess this question probably gets asked often, but does anybody have any suggestions?

I ended up testing Hyper-V, and it has similar issues to oVirt in that it really wants a domain controller before it will connect to a network share. Next, I tried out Proxmox and I'm seriously impressed! The UI just makes sense to me and I was able to configure everything without issues. Now I just need to create an LXC for Docker and Portainer, because that feels like the right thing to do.

Hughlander
May 11, 2005

RVWinkle posted:

I ended up testing Hyper-V, and it has similar issues to oVirt in that it really wants a domain controller before it will connect to a network share. Next, I tried out Proxmox and I'm seriously impressed! The UI just makes sense to me and I was able to configure everything without issues. Now I just need to create an LXC for Docker and Portainer, because that feels like the right thing to do.

Running it as an LXC is recommended, but in a recent rebuild of some things I moved it straight onto the hypervisor so I didn't need to pass in mount points / restart the LXC whenever new ZFS shares were created. If you go down the LXC route you'll need to use their nested flag, and I think that's it; I can't recall, since it's all done by Ansible now.

I do love Proxmox though. I have a 2-node cluster, one node being a 100TB NAS, the other a dedicated Docker/VM host with 128GB of memory.
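If you do go the LXC route, the nested flag is just a feature line on the container config. Something like this, going from memory since Ansible handles it for me these days (the container ID is made up):

In /etc/pve/lxc/101.conf:
features: keyctl=1,nesting=1

or the same thing via the CLI:
pct set 101 --features keyctl=1,nesting=1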

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost

Hughlander posted:

Running it as an LXC is recommended, but in a recent rebuild of some things I moved it straight onto the hypervisor so I didn't need to pass in mount points / restart the LXC whenever new ZFS shares were created. If you go down the LXC route you'll need to use their nested flag, and I think that's it; I can't recall, since it's all done by Ansible now.

I do love Proxmox though. I have a 2-node cluster, one node being a 100TB NAS, the other a dedicated Docker/VM host with 128GB of memory.

Haha, well I guess you won't be surprised to find that I spent quite a bit of time struggling with mount points yesterday. I did it the hard way, starting with how I expected it to work and then slowly discovering how it actually works. Today I'm going to mess with bind mounts and pray I don't get stuck in permissions hell.

Hughlander
May 11, 2005

RVWinkle posted:

Haha, well I guess you won't be surprised to find that I spent quite a bit of time struggling with mount points yesterday. I did it the hard way, starting with how I expected it to work and then slowly discovering how it actually works. Today I'm going to mess with bind mounts and pray I don't get stuck in permissions hell.

I haven't published it to GitHub yet, but I wrote a set of Ansible scripts that help with this:

- Destroy existing file servers, cleaning up ssh keys on the two Proxmox hosts
- Create one new LXC host on each Proxmox host to serve files over CIFS/NFS
- Iterate over all ZFS datasets and add them as mount points on the fileservers
  - Ignore a few, like /rpool/ROOT
  - Give aliases to others, like /rpool/subvol-115-disk-0: /nodes/abba
- Set up /etc/exports for the local subnets
- Set up smb.conf with a local workgroup
- Create users for smb to connect with, based on the inventory file
- Export each of the mount points over smb
- Go to the Proxmox servers and edit /etc/fstab to mount all connections not on this machine

So if I do add a mount point to ZFS, I just destroy the fileserver for that host, then rerun the create-fileservers step. I also haven't pushed my changes to the Proxmox Ansible module that handle the NFS nesting yet, but I have that locally as well.
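For the curious, the mount-point step is roughly this shape. This is a sketch, not the real thing: the variable names and alias path are placeholders, and it assumes the datasets keep their default mountpoints:

# List every ZFS filesystem on the host
- name: List ZFS datasets
  ansible.builtin.command: zfs list -H -o name -t filesystem
  register: zfs_datasets
  changed_when: false

# Bind-mount each dataset into the fileserver LXC, skipping rpool/ROOT
- name: Attach each dataset as a mount point on the fileserver LXC
  ansible.builtin.command: >-
    pct set {{ fileserver_vmid }} --mp{{ idx }} /{{ item }},mp=/nodes/{{ item | basename }}
  loop: "{{ zfs_datasets.stdout_lines | reject('search', 'rpool/ROOT') | list }}"
  loop_control:
    index_var: idx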

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Potato Salad posted:

One of my clients has five vSAN nodes for $150,000 with similar capacity, all on SSDs. That includes compute and licensing, not just storage.

I kind of don't get it?

If you have workloads that deduplicate and compress reasonably well (which is most general-purpose virtual workloads), then you can get a good bit of all-flash capacity relatively cheap.

vSAN can also be pretty cost-effective, but it is strictly worse for raw-to-usable capacity than external arrays in almost every situation, and there are performance implications to running erasure coding, dedupe, and compression on the platform. It's also very feature-poor as storage, with no zero-impact snapshot capabilities, no native replication, no zero-space clones, etc. It's pretty bare-bones storage. I actually think that's fine, and those operations are better handled above the storage layer, but there are still good reasons why a customer might want those features, and vSAN won't fit the bill.

YOLOsubmarine fucked around with this message at 22:15 on Jul 4, 2020

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost
Welp, it looks like I'm stuck on permissions as I anticipated. Basically, I want to mount a CIFS share as R/W in an LXC for a Transmission Docker container.

On the pve host, I have the following in /etc/fstab:
//x.x.x.x/Downloads /mnt/Downloads/ cifs credentials=/etc/win-credentials 0 0

It mounts correctly on the pve host, can read and write, and shows root as the owner. Next, I add it into /etc/pve/lxc/200.conf:
mp0: /mnt/Downloads,mp=/mnt/Downloads

It mounts as read-only in the LXC container, which has keyctl=1,nesting=1 and unprivileged: 1 set.

Finally, I follow the lxc.idmap instructions and everything goes to hell:
https://pve.proxmox.com/wiki/Unprivileged_LXC_containers#Using_local_directory_bind_mount_points

If I follow the directions exactly, it fails after adding the second set of lxc.idmap commands and the container will no longer start with the following error: newuidmap: write to uid_map failed: Invalid argument

I'm trying to follow the instructions exactly just as a proof of concept but I think I just need to map root to root. This seems like it's defeating the purpose of an unprivileged container but I'm not sure if there's another way to mount a CIFS share.
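For reference, the root-to-root variant I'm talking about would look something like this in /etc/pve/lxc/200.conf, if I'm reading the wiki right (plus a root:0:1 line in both /etc/subuid and /etc/subgid so the host allows the mapping):

lxc.idmap: u 0 0 1
lxc.idmap: g 0 0 1
lxc.idmap: u 1 100001 65535
lxc.idmap: g 1 100001 65535

Apparently the mapped ranges have to be contiguous and non-overlapping, which is exactly the sort of thing that produces that newuidmap "Invalid argument" error.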

By the way, the file share works fine in privileged mode but then docker fails with an AppArmor permission denied error.

Any suggestions would be appreciated.

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost
I spent some more time thinking about LXC and security, and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using NFS mounts. I ended up deploying a RancherOS VM instead. I used it a while back when they tried to make it the official FreeNAS Docker solution, and it's pretty cool how the whole OS is defined by compose files.

Potato Salad
Oct 23, 2014

nobody cares


YOLOsubmarine posted:

It's also very feature-poor as storage, with no zero-impact snapshot capabilities, no native replication, no zero-space clones, etc. It's pretty bare-bones storage

aaaaaaaah I wasn't thinking about this right

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

RVWinkle posted:

I spent some more time thinking about LXC and security, and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using NFS mounts. I ended up deploying a RancherOS VM instead. I used it a while back when they tried to make it the official FreeNAS Docker solution, and it's pretty cool how the whole OS is defined by compose files.

Sounds like a good idea, best to never expose the hypervisor itself when possible.

Hughlander
May 11, 2005

RVWinkle posted:

I spent some more time thinking about LXC and security, and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using NFS mounts. I ended up deploying a RancherOS VM instead. I used it a while back when they tried to make it the official FreeNAS Docker solution, and it's pretty cool how the whole OS is defined by compose files.

As I said, that's the preferred way of doing it. I just chose not to because I didn't want to pay the memory penalty a full VM would have, and it's home use anyway.

Hughlander
May 11, 2005

RVWinkle posted:

I spent some more time thinking about LXC and security, and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using NFS mounts. I ended up deploying a RancherOS VM instead. I used it a while back when they tried to make it the official FreeNAS Docker solution, and it's pretty cool how the whole OS is defined by compose files.

I also forgot one reason why I like putting Docker containers on the Proxmox host: I use the ZFS driver in Docker, with ZFS volumes. So you get all the CoW goodness as well as snapshots and zfs send/receive backups. You can't pass the ZFS devices through to an LXC, and an iSCSI mount is obviously a black box to the host. May not be a good reason still, but it is a reason.
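If anyone wants to replicate it, the setup is basically a dataset for /var/lib/docker plus the storage-driver key in daemon.json. Roughly this (the pool and dataset names are mine, adjust to taste; Docker wants /var/lib/docker empty and on ZFS before the daemon starts):

zfs create -o mountpoint=/var/lib/docker rpool/docker

and in /etc/docker/daemon.json:
{
  "storage-driver": "zfs"
}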

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost

Hughlander posted:

I also forgot one reason why I like putting Docker containers on the Proxmox host: I use the ZFS driver in Docker, with ZFS volumes. So you get all the CoW goodness as well as snapshots and zfs send/receive backups. You can't pass the ZFS devices through to an LXC, and an iSCSI mount is obviously a black box to the host. May not be a good reason still, but it is a reason.

Nice, I'd like to learn more about the ZFS driver for Docker. I guess for all my bitching about systems requiring standalone servers, storage is the one thing I prefer to keep separate. Setting up Proxmox has allowed me to abandon all of my FreeNAS jails, and I feel much safer allowing services through the firewall.

SamDabbers
May 26, 2003



What about FreeNAS jails makes you feel less safe than Proxmox?

RVWinkle
Aug 24, 2004

In relating the circumstances which have led to my confinement within this refuge for the demented, I am aware that my present position will create a natural doubt of the authenticity of my narrative.
Nap Ghost

SamDabbers posted:

What about FreeNAS jails makes you feel less safe than Proxmox?

Don't get me wrong, I'm a big fan of FreeNAS and BSD, but there are a number of security concerns with running services on your storage. Generally, if possible, I prefer to segregate applications and storage onto separate physical servers or clusters. The big concern is container breakout: if your hypervisor is compromised, it's bad, but if a bad actor gets root on your storage environment then everything can be lost. I think the general consensus is that this is more of a best practice than a major security issue, but I think it makes sense.

Of course, two physical servers isn't always possible in a home environment due to costs and whatnot, but I recently had the good fortune to inherit a computer that's perfect for running a hypervisor. Last year, I had moved most of my jails to a Raspberry Pi with Docker, and it was awesome. The big exception was Plex, as it needed more horsepower for transcoding, but I absolutely refused to expose my Plex jail to the internet.

I have run jails on my FreeNAS for years, but I was always afraid to connect them to the internet because that meant my storage was exposed to some degree, and I treated it as a security risk. There are a number of instances where software on FreeNAS isn't up to date and potential security issues may not be patched right away. Take Plex, for example: the BSD version always lags behind the release version. Even worse is how FreeNAS has historically been behind the BSD release schedule. They have really turned this around in the past 6 months, but there was a period of a couple months last year where 11.2 fell out of support and you couldn't get security updates for your jails. These security issues aren't the end of the world in a home environment, but it makes sense to avoid them if possible.

Sri.Theo
Apr 16, 2008
Does anyone have a good recent guide to using VirtualBox on a modern Mac?

I'm following the obvious steps and can get it to show a BIOS, but it never actually loads into macOS - I'm thinking there may be some licensing issues?

Mr Shiny Pants
Nov 12, 2012

Hughlander posted:

I also forgot one reason why I like putting docker containers on the proxmox host. I use the zfs driver in docker and zfs volumes. So you get all the COW goodness as well as snapshots and zfs send/receive backups. You can't pass the zfs devices to an LXC, and mounting iSCSI is a blackbox obviously to the host. May not be a good reason still but is a reason.

The performance is horrible in my experience, especially when doing intensive stuff. I decided to create a ZVOL formatted as ext4 and use that instead. Things that took minutes now take seconds; it really is night and day.
Something as dumb as reconfiguring NGINX and having it rebuild the container used to take a minute or more.
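The ZVOL bit is simple enough, roughly this (size and names made up):

zfs create -V 64G rpool/docker
mkfs.ext4 /dev/zvol/rpool/docker
mount /dev/zvol/rpool/docker /var/lib/docker

Docker then just runs its default overlay2 driver on top of the ext4 volume.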

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Anyone that wants GPU paravirtualization in their Hyper-V Windows 10 guests, follow these instructions.

https://forum.cfx.re/t/running-fivem-in-a-hyper-v-vm-with-full-gpu-performance-for-testing-gpu-partitioning/1281205

Been grinding my teeth over this for ages; I finally ran into this, and it turns out you have to sort of install the drivers yourself :suicide:

(Note the HostRepository stuff, which is specific to the VM.)
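The PowerShell side boils down to a handful of Hyper-V cmdlets, something like this (the VM name is a placeholder; copying the drivers into the guest's HostDriverStore is the fiddly part the guide covers):

Get-VMPartitionableGpu    # confirm the host GPU supports partitioning
Add-VMGpuPartitionAdapter -VMName "Win10Guest"
Set-VM -VMName "Win10Guest" -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB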

--edit:
Proof is in the pudding: https://i.imgur.com/3cjR4CK.jpg

Combat Pretzel fucked around with this message at 02:18 on Jul 21, 2020

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Is there any way to tune RDP to maximize image quality? When there's a lot going on on-screen, it starts to affect the image quality, which I can live with so long as the display's busy. But once everything on screen freezes, I'm often left with artifacts. Is there a setting in RDP to force a full update when UI activity settles?

Essentially this poo poo (the green tinted stripes and blocks):

Internet Explorer
Jun 1, 2005





Not really. Citrix and the like allow for way more customization.

https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/remote-desktop/session-hosts

You can try disabling the bitmapcache, but I will say I've never done that and don't know what the ramifications will be.
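If you want to try it, I believe it's a one-line toggle in the .rdp file (or untick "Persistent bitmap caching" on the Experience tab in mstsc):

bitmapcachepersistenable:i:0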

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless.

I mean, I'm using RDP over VMBus, either via VMConnect or a carefully crafted .rdp file, so I couldn't care less about compression.

--edit:
Heh, it even stutters less in lossless mode.

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".

Combat Pretzel posted:

I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless.

I mean, I'm using RDP over VMBus, either via VMConnect or a carefully crafted .rdp file, so I couldn't care less about compression.

--edit:
Heh, it even stutters less in lossless mode.

This may be dumb, but would this affect anything with the client-side Citrix Receiver software? That's how I log into work, and it has some annoying issues, but there doesn't seem to be a lot of configuration I can do.

Wait, I just realized that was on the host side, which I probably can't change. Oh well... I'll leave it here in case anyone has any ideas.

Internet Explorer
Jun 1, 2005





Combat Pretzel posted:

I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless.

I mean, I'm using RDP over VMBus, either via VMConnect or a carefully crafted .rdp file, so I couldn't care less about compression.

--edit:
Heh, it even stutters less in lossless mode.

drat, my bad for giving you wrong info. I did not see that in my search.

namlosh posted:

This may be dumb, but would this affect anything with the client-side Citrix Receiver software? That's how I log into work, and it has some annoying issues, but there doesn't seem to be a lot of configuration I can do.

Wait, I just realized that was on the host side, which I probably can't change. Oh well... I'll leave it here in case anyone has any ideas.

A lot of RDS settings apply to Citrix as well, but Citrix also has a bunch of settings that can be configured on the server side.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

I upgraded from 6.0 to 6.7 167 days ago... it seems like taking a snapshot now pauses the VMs for quite a bit. I don't remember it doing this before. What should I check? Traffic on the Nimble we use for storage is actually lower than it has been for a while (probably COVID related); my first thought was high I/O.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

namlosh posted:

This may be dumb, but would this affect anything with the client-side Citrix Receiver software? That's how I log into work, and it has some annoying issues, but there doesn't seem to be a lot of configuration I can do.
In my new workplace, I'm currently also using a thin client running Citrix, and it creates obnoxious artifacts similar to what I posted earlier. You're scrolling through some PDFs of generated graphs and poo poo, and there are huge grey blocks in random positions, macro blocking in gradients, and what not. God I hate that thin client fad. I wish I could override the server settings.

eonwe
Aug 11, 2008



Lipstick Apathy
well, I am the Nutanix admin for our company now....time to learn...anything about it

Pile Of Garbage
May 28, 2007



Combat Pretzel posted:

I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless.

I mean, I'm using RDP over VMBus, either via VMConnect or a carefully crafted .rdp file, so I couldn't care less about compression.

--edit:
Heh, it even stutters less in lossless mode.

Make sure you're allowing both tcp/3389 and udp/3389. RDP will use both and prefers the latter for better performance.
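If the session host's firewall is what's eating the UDP side, something like this should open it up (the rule name is arbitrary):

New-NetFirewallRule -DisplayName "Remote Desktop - UDP-In" -Protocol UDP -LocalPort 3389 -Direction Inbound -Action Allow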

Pikehead
Dec 3, 2006

Looking for WMDs, PM if you have A+ grade stuff
Fun Shoe
Does anyone know where Dell has put their ESXi driver software depot for VMware Update Manager?

I found https://vmwaredepot.dell.com/index.xml but I can't see drivers, only modules for stuff I don't want to install.

Pikehead
Dec 3, 2006

Looking for WMDs, PM if you have A+ grade stuff
Fun Shoe

Pikehead posted:

Does anyone know where Dell has put their ESXi driver software depot for VMware Update Manager?

I found https://vmwaredepot.dell.com/index.xml but I can't see drivers, only modules for stuff I don't want to install.

I'm pretty sure there is no software depot for Dell drivers - it looks like they just update their update ISOs. They'll release an ISO in Feb 2020 at VMware version X, and then add newer drivers to it right up until (at the moment) June without changing the name of the ISO.

In the end I added an upgrade baseline in VUM and applied the drivers that way, then just ran a patch baseline after it to get the VMware bits updated.
