|
Wow, that's depressing. Ease and simplicity of upgrades are a huge part of their sales pitch.
|
# ? Jun 24, 2020 02:50 |
|
|
GrandMaster posted:Yeah it's a very good point, obviously failover stuff was all tested during commissioning, and we have multiple non-prod environments for all the critical apps, but it's all on the same hardware. The higher-ups wanted ONE BIG CLUSTER and completely disregarded advice to split non-prod environments onto physically separate hardware like we used to have pre-HCI. I've heard nothing good sadly. Hopefully with vSphere Lifecycle Manager (in 7.0) it'll become better/easier
|
# ? Jun 25, 2020 00:38 |
|
I know hyperconverged certainly has its use cases and theoretically can simplify troubleshooting since you have a verified stack and a single vendor, but I’m just not convinced by it yet especially with moving as much as we are to SaaS or IaaS on Azure for tier 0/1 apps. We’re on UCS/Pure for our VMware clusters and couldn’t be happier, it’s a pretty drat solid combination.
|
# ? Jun 25, 2020 00:48 |
|
devmd01 posted:I know hyperconverged certainly has its use cases and theoretically can simplify troubleshooting since you have a verified stack and a single vendor, but I’m just not convinced by it yet especially with moving as much as we are to SaaS or IaaS on Azure for tier 0/1 apps. My previous work moved to vSAN (before I started there) because "traditional SANs are too complicated and we don't want to have a 'Storage Admin'". They were really used to the old-school SANs and didn't realize that with a lot of the newer-generation stuff (like Pure) you no longer need to know what the backend storage is doing from a RAID perspective, or tune it for resilience and performance; you basically present a LUN and off you go.
|
# ? Jun 25, 2020 17:25 |
|
My only problem with Pure is that the moment we went to year-to-year renewal instead of the three-year they wanted us to sign, my assigned customer engineer suddenly got real hard to get hold of. They continue to just work and do everything we want, and when we open a case for proactive support they're right there to help, so I'm not reaaaaal upset, just miffed.
|
# ? Jun 25, 2020 18:34 |
|
So I spent 8 hours yesterday struggling with the oVirt installation only to realize that I fundamentally disagree with the external DNS requirement of the platform. The documentation specifically states that it doesn't support DNS server virtual machines and installation requires a standalone DNS server. Anyway, now I'm trying to decide on which hypervisor/management interface to use on my home network. My main goals are to run pfsense and Plex but I also support a vmware environment at work and would like to find a low cost alternative. Right now I'm trying to decide between Hyper-V, XCP-NG, and KVM/Proxmox. I guess this question probably gets asked often but does anybody have any suggestions?
|
# ? Jul 1, 2020 13:41 |
|
devmd01 posted:I know hyperconverged certainly has its use cases and theoretically can simplify troubleshooting since you have a verified stack and a single vendor, but I’m just not convinced by it yet especially with moving as much as we are to SaaS or IaaS on Azure for tier 0/1 apps. To me, hyperconverged is for a couple of situations (on prem):
* where there's a minimal footprint required but redundancy at the host/storage level is required
* the business doesn't want to deal with separate storage
vSAN seems alright to me, Nutanix is fine (but not VMware on Nutanix). I prefer the traditional compute + SAN for on-prem, but that's what I learned on.
|
# ? Jul 2, 2020 11:16 |
|
how do you get onprem san with meaningful redundancy for less than a quarter of a million dollars though
|
# ? Jul 2, 2020 20:55 |
|
I have 35 TB of active/active storage and I think we paid $100k. Some of it is rotational though. I guess the secret is to not need high performance?
|
# ? Jul 2, 2020 21:20 |
|
Happiness Commando posted:I have 35 TB of active/active storage and I think we paid $100k. Some of it is rotational though. I guess the secret is to not need high performance? One of my clients has five vSAN nodes for $150,000 with similar capacity, all on SSDs. That includes compute and licensing, not just storage. I kind of don't get it?
|
# ? Jul 2, 2020 21:43 |
|
I use VirtualBox for an Ubuntu VM to dabble in programming and media. I've been having host RAM issues lately (I think it's RAM) and I just tried to launch my Ubuntu VM and the .vdi is inaccessible. After some googling and research, I've found that my .vdi file is 0KB. I'm guessing that means it's corrupted. Is there any magic I can use to try repairing the .vdi file, or bring it to life long enough to get some media out of the VM?
|
# ? Jul 3, 2020 02:09 |
|
Potato Salad posted:how do you get onprem san with meaningful redundancy for less than a quarter of a million dollars though Dude, storage is relatively cheap, and most SAN gear is 25Gb+ fiber minimum now.
|
# ? Jul 3, 2020 03:10 |
|
RVWinkle posted:So I spent 8 hours yesterday struggling with the oVirt installation only to realize that I fundamentally disagree with the external DNS requirement of the platform. The documentation specifically states that it doesn't support DNS server virtual machines and installation requires a standalone DNS server. Anyway, now I'm trying to decide on which hypervisor/management interface to use on my home network. My main goals are to run pfsense and Plex but I also support a vmware environment at work and would like to find a low cost alternative. Right now I'm trying to decide between Hyper-V, XCP-NG, and KVM/Proxmox. I guess this question probably gets asked often but does anybody have any suggestions? I ended up testing Hyper-V and it has similar issues that oVirt has where it really wants a domain controller before it will connect to a network share. Next, I tried out Proxmox and I'm seriously impressed! The UI just makes sense to me and I was able to configure everything without issues. Now I just need to create an LXC for docker and Portainer because that feels like the right thing to do.
|
# ? Jul 3, 2020 05:46 |
|
RVWinkle posted:I ended up testing Hyper-V and it has similar issues that oVirt has where it really wants a domain controller before it will connect to a network share. Next, I tried out Proxmox and I'm seriously impressed! The UI just makes sense to me and I was able to configure everything without issues. Now I just need to create an LXC for docker and Portainer because that feels like the right thing to do. Running it as an LXC is recommended, but in a recent rebuild of some things I moved it straight to the hypervisor so I didn't need to pass in mount points / restart the LXC if new ZFS shares were created. If you go down that route you'll need to use their nested flag and I think that's it; I can't recall since it's all done by ansible now. I do love Proxmox though. I have a 2-node cluster, one being a 100TB NAS, the other a dedicated docker/VM host with 128GB of memory.
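For anyone following along at home, the nested flag mentioned above lives in the container's config under /etc/pve/lxc/<vmid>.conf. A minimal sketch (the container ID and exact flag set are from memory, so double-check against your Proxmox version):

```
unprivileged: 1
features: keyctl=1,nesting=1
```

nesting=1 is what lets docker run inside the LXC; keyctl=1 is commonly needed alongside it for unprivileged containers.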
|
# ? Jul 3, 2020 18:20 |
|
Hughlander posted:Running it as an LXC is recommended, but in a recent rebuild of some things I moved it straight to the hypervisor so I didn't need to pass in mount points / restart the LXC if new ZFS shares were created. If you go down that route you'll need to use their nested flag and I think that's it, I can't recall since it's all done by ansible now. Haha, well I guess you won't be surprised to find that I spent quite a bit of time struggling with mount points yesterday. I did it the hard way, starting with how I expected it to work and then slowly discovering how it actually works. Today I'm going to mess with bind mounts and pray I don't get stuck in permissions hell.
|
# ? Jul 4, 2020 15:52 |
|
RVWinkle posted:Haha, well I guess you won't be surprised to find that I spent quite a bit of time struggling with mount points yesterday. I did it the hard way starting with how I expected it to work and then slowly discovering how it actually works. Today I'm going to mess with bind mounts and pray I don't get stuck in permissions hell. I haven't published it to github yet, but I wrote a set of ansible scripts that help with this:
- Destroy existing file servers, cleaning up ssh keys on the two proxmox hosts
- Create one new LXC host on each proxmox host to serve files over cifs/nfs
- Iterate over all ZFS datasets and add them to mountpoints for the fileservers
  - Ignore a few like /rpool/ROOT
  - Give aliases to others like /rpool/subvol-115-disk-0: /nodes/abba
- Set up /etc/exports to the local subnets
- Set up smb.conf with a local workgroup
- Create users for smb to connect based on the inventory file
- Export each of the mountpoints over smb
- Go to the proxmox servers and edit /etc/fstab to mount all connections not on this machine.
So if I do add a mountpoint to zfs, I just destroy the fileserver for that host, then rerun the create fileservers. I also haven't pushed my changes to the proxmox ansible module yet that handles the nfs nesting, but I have that locally as well.
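Since the playbooks aren't published, here's a rough sketch of what the /etc/exports step might look like. The variable name, subnet, and handler are all made up for illustration; only the module itself is real Ansible:

```yaml
# Hypothetical task: publish every discovered ZFS mountpoint over NFS.
# `fileserver_mounts` stands in for the dataset list gathered earlier
# in the play (after filtering out /rpool/ROOT and friends).
- name: Add ZFS mountpoints to /etc/exports
  ansible.builtin.lineinfile:
    path: /etc/exports
    line: "{{ item }} 192.168.1.0/24(rw,no_subtree_check)"
  loop: "{{ fileserver_mounts }}"
  notify: reload nfs exports
```

lineinfile keeps the task idempotent, so rerunning the "create fileservers" play after adding a dataset just appends the one new export.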
|
# ? Jul 4, 2020 16:22 |
|
Potato Salad posted:One of my clients has five vSAN nodes for $150,000 with similar capacity, all on SSDs. That includes compute and licensing, not just storage. If you have workloads that deduplicate and compress reasonably well (which is most general purpose virtual workloads) then you can get a good bit of all-flash capacity for relatively cheap. VSAN can also be pretty cost effective, but it is strictly worse for raw-to-usable capacity in almost every situation than external arrays, and there are performance implications of running erasure coding, dedupe and compression on the platform. It’s also very feature-poor as storage, with no zero-impact snapshot capabilities, no native replication, no zero-space clones, etc. It's pretty bare-bones storage. I actually think that’s fine and those operations are better handled above the storage layer, but there are still good reasons why a customer might want to use those features and vSAN won’t fit the bill. YOLOsubmarine fucked around with this message at 22:15 on Jul 4, 2020 |
# ? Jul 4, 2020 19:50 |
|
Welp, it looks like I'm stuck on permissions as I anticipated. Basically, I want to mount a CIFS share as R/W in an LXC for a transmission docker container. On the pve host, I have the following in /etc/fstab:

//x.x.x.x/Downloads /mnt/Downloads/ cifs credentials=/etc/win-credentials 0 0

It mounts correctly on the pve host, can read and write, and shows root as the owner. Next, I add it into /etc/pve/lxc/200.conf:

mp0: /mnt/Downloads,mp=/mnt/Downloads

It mounts as read-only in the LXC container with keyctl=1,nesting=1 and unprivileged: 1. Finally, I follow the lxc.idmap instructions and everything goes to hell: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers#Using_local_directory_bind_mount_points If I follow the directions exactly, it fails after adding the second set of lxc.idmap commands and the container will no longer start, with the following error:

newuidmap: write to uid_map failed: Invalid argument

I'm trying to follow the instructions exactly just as a proof of concept, but I think I just need to map root to root. This seems like it's defeating the purpose of an unprivileged container, but I'm not sure if there's another way to mount a CIFS share. By the way, the file share works fine in privileged mode, but then docker fails with an AppArmor permission-denied error. Any suggestions would be appreciated.
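That "Invalid argument" from newuidmap usually means the idmap entries don't tile the whole id range cleanly (a gap or overlap in the ranges). A little sketch that generates a consistent set of lines, including the root-to-root case from the post (the helper name is invented; the line format follows the Proxmox wiki):

```python
# Sketch: emit lxc.idmap lines for /etc/pve/lxc/<vmid>.conf that pass a
# single host uid (or gid) straight through to the container and map
# everything else into the usual 100000+ unprivileged range. The three
# ranges must cover 0..65535 with no gaps or overlaps, or the container
# fails with "newuidmap: write to uid_map failed: Invalid argument".

def idmap_lines(passthrough_id, kind="u", base=100000, size=65536):
    """Return lxc.idmap config entries as a list of strings."""
    lines = []
    if passthrough_id > 0:
        # container ids 0..passthrough_id-1 -> base..base+passthrough_id-1
        lines.append(f"lxc.idmap: {kind} 0 {base} {passthrough_id}")
    # the one id mapped to itself on the host
    lines.append(f"lxc.idmap: {kind} {passthrough_id} {passthrough_id} 1")
    rest = size - passthrough_id - 1
    if rest > 0:
        start = passthrough_id + 1
        lines.append(f"lxc.idmap: {kind} {start} {base + start} {rest}")
    return lines

if __name__ == "__main__":
    # root-to-root, as the post describes wanting for the CIFS mount
    for line in idmap_lines(0):
        print(line)
```

If I'm reading the wiki right, the passthrough also needs a matching allowance on the host side, e.g. root:0:1 in /etc/subuid and /etc/subgid for the root-to-root case; without that you get the same startup failure.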
|
# ? Jul 4, 2020 23:11 |
|
I spent some more time thinking about LXC and security and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using nfs mounts. I ended up deploying a RancherOS vm instead. I used it a while back when they tried to make it the official FreeNAS docker solution and it's pretty cool how the whole OS is defined by composer files.
|
# ? Jul 6, 2020 05:29 |
|
YOLOsubmarine posted:It’s also very feature poor as storage, with no zero impact snapshot capabilities, no native replication, no zero space clones, etc...it’s pretty bare bones storage aaaaaaaah I wasn't thinking about this right
|
# ? Jul 6, 2020 17:12 |
|
RVWinkle posted:I spent some more time thinking about LXC and security and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using nfs mounts. I ended up deploying a RancherOS vm instead. I used it a while back when they tried to make it the official FreeNAS docker solution and it's pretty cool how the whole OS is defined by composer files. Sounds like a good idea, best to never expose the hypervisor itself when possible.
|
# ? Jul 6, 2020 21:18 |
|
RVWinkle posted:I spent some more time thinking about LXC and security and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using nfs mounts. I ended up deploying a RancherOS vm instead. I used it a while back when they tried to make it the official FreeNAS docker solution and it's pretty cool how the whole OS is defined by composer files. As I said, that's the preferred way of doing it. I just chose not to because I didn't want to pay the memory penalty a full VM would have, and it's home use anyway.
|
# ? Jul 6, 2020 22:20 |
|
RVWinkle posted:I spent some more time thinking about LXC and security and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using nfs mounts. I ended up deploying a RancherOS vm instead. I used it a while back when they tried to make it the official FreeNAS docker solution and it's pretty cool how the whole OS is defined by composer files. I also forgot one reason why I like putting docker containers on the proxmox host. I use the zfs driver in docker and zfs volumes. So you get all the COW goodness as well as snapshots and zfs send/receive backups. You can't pass the zfs devices to an LXC, and mounting iSCSI is obviously a black box to the host. May not be a good reason still, but it is a reason.
|
# ? Jul 8, 2020 15:26 |
|
Hughlander posted:I also forgot one reason why I like putting docker containers on the proxmox host. I use the zfs driver in docker and zfs volumes. So you get all the COW goodness as well as snapshots and zfs send/receive backups. You can't pass the zfs devices to an LXC, and mounting iSCSI is a blackbox obviously to the host. May not be a good reason still but is a reason. Nice, I'd like to learn more about the zfs driver for docker. I guess for all my bitching about systems requiring standalone servers, storage is the one thing I prefer to keep separate. Setting up proxmox has allowed me to abandon all of my freenas jails and I feel much safer in allowing services through the firewall.
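If you want to poke at it, the driver is just a daemon setting. A minimal sketch of /etc/docker/daemon.json, assuming Docker's data root already sits on a ZFS dataset (restart the daemon after changing it):

```json
{
  "storage-driver": "zfs"
}
```

With that set, Docker creates a ZFS dataset per image layer and container, which is what makes the zfs snapshot and send/receive backup workflow described above possible.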
|
# ? Jul 9, 2020 01:38 |
|
What about FreeNAS jails makes you feel less safe than Proxmox?
|
# ? Jul 9, 2020 17:14 |
|
SamDabbers posted:What about FreeNAS jails makes you feel less safe than Proxmox? Don't get me wrong, I'm a big fan of FreeNAS and BSD, but there are a number of security concerns with running services on your storage. Generally, if possible, I prefer to segregate applications and storage to separate physical servers or clusters. The big concern is container breakout: if your hypervisor is compromised, it's bad, but if a bad actor gets root on your storage environment then everything can be lost. I think the general consensus is that this is more of a best practice than a major security issue, but I think it makes sense.

Of course, two physical servers isn't always possible in a home environment due to costs and whatnot, but I have recently been fortunate to inherit a computer that's perfect for running a hypervisor. Last year, I had moved most of my jails to a raspberry pi with Docker and it was awesome. The big exception was Plex, as it needed more horsepower for transcoding, but I absolutely refused to expose my plex jail to the internet.

I have run jails on my FreeNAS for years, but I was always afraid to connect them to the internet because that meant my storage was exposed to some degree and I treated it as a security risk. There are a number of instances where software on FreeNAS isn't up to date and potential security issues may not be patched right away. Take Plex, for example: the BSD version always lags behind the release version. Even worse is how FreeNAS has historically been behind compared to the BSD release schedule. They have really turned this around in the past 6 months, but there was a period of a couple months last year where 11.2 fell out of support and you couldn't get security updates for your jails. These security issues aren't the end of the world in a home environment, but it makes sense to avoid them if possible.
|
# ? Jul 11, 2020 14:50 |
|
Does anyone have a good recent guide to using VirtualBox on a modern Mac? I'm following the obvious steps and get it to show a BIOS, but it never actually loads into macOS - I'm thinking there may be some licensing issues?
|
# ? Jul 11, 2020 15:26 |
|
Hughlander posted:I also forgot one reason why I like putting docker containers on the proxmox host. I use the zfs driver in docker and zfs volumes. So you get all the COW goodness as well as snapshots and zfs send/receive backups. You can't pass the zfs devices to an LXC, and mounting iSCSI is a blackbox obviously to the host. May not be a good reason still but is a reason. The performance is horrible in my experience, especially when doing some intensive stuff. I decided to create a ZVOL that is ext4-formatted and use that instead. Things that took minutes now take seconds; it really is night and day. Something as dumb as reconfiguring NGINX and having it rebuild the container would take a minute or more.
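For anyone wanting to try the ZVOL route, it's roughly the recipe below. The pool/dataset names and size are made up, and this obviously only makes sense run as root on a box that actually has ZFS:

```
# Carve out a sparse 50G zvol, format it ext4, and put Docker's data root on it.
zfs create -V 50G -s rpool/docker
mkfs.ext4 /dev/zvol/rpool/docker
mkdir -p /var/lib/docker
echo '/dev/zvol/rpool/docker /var/lib/docker ext4 defaults 0 0' >> /etc/fstab
mount /var/lib/docker
# Docker then uses its default driver (overlay2) on ext4 instead of the zfs driver.
```

You give up per-layer ZFS datasets, but the zvol itself can still be snapshotted and sent/received as one unit.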
|
# ? Jul 16, 2020 09:02 |
|
Anyone that wants GPU paravirtualization in their Hyper-V Windows 10 guests, follow these instructions: https://forum.cfx.re/t/running-fivem-in-a-hyper-v-vm-with-full-gpu-performance-for-testing-gpu-partitioning/1281205 Been grinding my teeth over this for ages, then I finally ran into this; it turns out you have to sort of install the drivers manually (note the HostRepository stuff, which is specific to the VM). --edit: Proof is in the pudding: https://i.imgur.com/3cjR4CK.jpg Combat Pretzel fucked around with this message at 02:18 on Jul 21, 2020 |
# ? Jul 21, 2020 02:00 |
|
Is there any way to tune RDP to maximize image quality? When there's a lot going on on-screen, it starts to affect the image quality, which I can live with, so long as the display's busy. But once everything on screen freezes, I'm often left with artifacts. Is there a setting in RDP to force a full update when UI activity settles? Essentially this poo poo (the green-tinted stripes and blocks):
|
# ? Jul 22, 2020 20:33 |
|
Not really. Citrix and the like allow for way more customization. https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/remote-desktop/session-hosts You can try disabling the bitmapcache, but I will say I've never done that and don't know what the ramifications will be.
|
# ? Jul 22, 2020 20:43 |
|
I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless. I mean, I'm using RDP over VMBus, either via VMConnect or a carefully crafted .rdp file, so I couldn't care less about compression. --edit: Heh, it even stutters less in lossless mode.
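For reference, the "carefully crafted .rdp file" side of this probably boils down to a few fields like the ones below. Key names are from memory, so treat them as a starting point rather than gospel:

```
full address:s:MyVM
connection type:i:6
bandwidthautodetect:i:0
compression:i:0
bitmapcachepersistenable:i:0
```

Connection type 6 means LAN; turning off bandwidth autodetect and compression keeps the client from degrading quality, and disabling the persistent bitmap cache helps avoid stale cached tiles.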
|
# ? Jul 22, 2020 21:03 |
|
Combat Pretzel posted:I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless. This may be dumb, but would this affect anything with the client-side Citrix receiver software? That’s how I log into work and it has some annoying issues, but there doesn’t seem to be a lot of configuration I can do. Wait, I just realized that was on the host side, which I probably can’t change. Oh well... I’ll leave it here in case anyone has any ideas
|
# ? Jul 22, 2020 23:03 |
|
Combat Pretzel posted:I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless. drat, my bad for giving you wrong info. I did not see that in my search. namlosh posted:This may be dumb, but would this affect anything with the client-side Citrix receiver software? That’s how I log into work and it has some annoying issues but there doesn’t seem to be a lot of configuration i can do. A lot of RDS settings apply to Citrix as well, but Citrix also has a bunch of settings that can be configured on the server side.
|
# ? Jul 22, 2020 23:07 |
|
I upgraded from 6.0 to 6.7 167 days ago... it seems like taking a snapshot is pausing the VMs for quite a bit. I don’t remember it doing this before; what should I check? Traffic on the Nimble we use for storage is actually lower than it has been for a while (probably Covid-related), my first thought was high I/O
|
# ? Jul 23, 2020 00:09 |
|
namlosh posted:This may be dumb, but would this affect anything with the client-side Citrix receiver software? That’s how I log into work and it has some annoying issues but there doesn’t seem to be a lot of configuration i can do.
|
# ? Jul 23, 2020 00:29 |
well, I am the Nutanix admin for our company now....time to learn...anything about it
|
|
# ? Jul 23, 2020 02:10 |
|
Combat Pretzel posted:I think I found it. In the Group Policy editor in the VM, RDS -> Remote Desktop Session Host -> Remote Session Environment -> Configure RemoteFX Adaptive Graphics to Lossless. Make sure you're allowing both tcp/3389 and udp/3389. RDP will use both and prefers the latter for better performance.
|
# ? Jul 23, 2020 04:12 |
|
Does anyone know where Dell has put their esxi driver software depot for VMware Update Manager? I found https://vmwaredepot.dell.com/index.xml but I can't see drivers, only modules for stuff I don't want to install.
|
# ? Jul 23, 2020 11:20 |
|
|
Pikehead posted:Does anyone know where Dell has put their esxi driver software depot for VMware Update Manager? I'm pretty sure there is no software depot for Dell drivers - it looks like they update their update ISOs - so they'll release an ISO in Feb 2020 at VMware version X, and then add newer drivers to it right up (at the moment) to June without changing the name of the ISO. In the end I added an Update Baseline in VUM and applied drivers that way, and then just ran a patch baseline after it to get the VMware bits updated.
|
# ? Jul 29, 2020 10:20 |