|
IT Guy posted: So I haven't really looked past vSphere ESXi, but two of you mentioned Essentials so I went and looked at what it was and apparently it's cheap as hell and comes with vSphere Server? That means I can migrate VMs from one host/datastore to another, correct?

You need to look into this for your environment: http://store.vmware.com/store/vmware/en_US/pd/ThemeID.2485600/productID.284146600

Essentials Plus is a super good deal for what you get. Essentials is $500 because it really doesn't do anything cool: http://store.vmware.com/store/vmware/Content/pbPage.vSphereComparePage

As far as HW goes, can you reuse any of your current servers? http://www.vmware.com/resources/compatibility/search.php

Dilbert As FUCK fucked around with this message at 18:06 on Sep 16, 2013
# ? Sep 16, 2013 18:04 |
|
|
|
Essentials is the bare-minimum kit that exists, and yes, it's like $300ish after typical VAR discounts. It lets you manage up to 3 hosts and gets you access to Update Manager for patching. But notably it does NOT let you do vMotion, which is moving around powered-on VMs, or HA, which automatically relocates VMs when a host dies. You need Essentials Plus for that, which honestly is still a steal at something like $4k for up to 3 hosts.

The big limitation of the Essentials kits is the 3-host max. Any more than that and you need to be on at least Standard, which is a significant price jump. With the size of environment you're talking about, that's major overkill. Essentials Plus is your sweet spot.

Docjowles fucked around with this message at 18:14 on Sep 16, 2013
# ? Sep 16, 2013 18:08 |
|
IT Guy posted:So I haven't really looked past vSphere ESXi, but two of you mentioned Essentials so I went and looked at what it was and apparently it's cheap as hell and comes with vSphere Server? That means I can migrate VMs from one host/datastore to another, correct?
|
# ? Sep 16, 2013 18:45 |
|
Dilbert As FUCK posted: As far as HW can you reuse any of your current servers?

That's kind of the problem. The current way we buy hardware is to buy cheap poo poo that basically runs the server itself and nothing more. Our 3 newer servers are Dell T110's with single-CPU Intel E3 chips, 16GB of RAM, and onboard NICs. RAM is obviously upgradable, as are the NICs, but the CPU might not be good enough for virtualization.

Docjowles posted: or HA which automatically relocates VM's when a host dies.

Does HA use resources on the other host all the time, or only when it actually has to fail over?

IT Guy fucked around with this message at 19:41 on Sep 16, 2013
# ? Sep 16, 2013 19:38 |
|
IT Guy posted: Does HA use resources on the other host all the time or only when it actually has to fail over?

Only when it actually has to fail over, but it's recommended that you reserve those resources (this is an HA setting) to ensure the other hosts in your cluster have enough spare capacity to start up your VMs in the event of a complete host failure.
|
# ? Sep 16, 2013 19:46 |
|
A lot of people confuse HA and FT (fault tolerance). HA is what Misogynist describes. FT runs two copies of a VM on two hosts in sync, so that if one host dies it will fail over instantly instead of being down for a few moments until the new VM boots. It's generally regarded as terrible and unusable.
|
# ? Sep 16, 2013 19:49 |
|
Docjowles posted: A lot of people confuse HA and FT (fault tolerance). HA is what Misogynist describes. FT runs two copies of a VM on two hosts in sync, so that if one host dies it will fail over instantly instead of being down for a few moments until the new VM boots. It's generally regarded as terrible and unusable.
|
# ? Sep 16, 2013 20:06 |
|
Ah yes, I was under the impression that HA was FT. HA is more of what we'd need. I don't think we would ever need fault tolerance.
|
# ? Sep 16, 2013 20:15 |
|
The HA/FT stuff is a very common VMware interview question, so take note. HA basically just means "can this VM be rebooted on the other hosts in the cluster if one host fails?", so you need shared storage, some leftover resources on the hosts, etc. I recommend setting an admission control policy for the cluster, but use percentage of cluster resources reserved instead of host failures tolerated, because the slot-size math behind host-failures-tolerated gets really weird. It definitely helps to use memory reservations too: that way you're sure HA/DRS can move VMs without causing memory contention, and with a full reservation the VM's .vswp swap file shrinks to nothing, which can save some space on the SAN.
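As a rough illustration of the percent-reserved policy, the math is just a fraction held back from total cluster capacity. This is my own throwaway sketch with invented numbers, not a VMware API:

```python
def usable_capacity_gb(host_mem_gb, reserve_fraction):
    """Cluster memory left for powering on VMs once HA's
    percentage-based admission control holds back a reserve.

    host_mem_gb: per-host memory sizes in GB
    reserve_fraction: share of total capacity reserved for failover
    """
    total = sum(host_mem_gb)
    return total * (1 - reserve_fraction)

# Three 16GB hosts reserving 33% (roughly one host's worth):
usable_capacity_gb([16, 16, 16], 0.33)  # ~32.2 GB left for VMs
```

The nice thing about the percentage policy is that this stays linear no matter how differently sized your VMs are, which is exactly why it dodges the weird slot-size behavior.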
|
# ? Sep 16, 2013 20:19 |
|
IT Guy posted: Our 3 newer servers are Dell T110's with single CPU Intel E3 chips with 16GB of RAM and onboard NICs. RAM is obviously upgradable as is the NICs but the CPU might not be good enough for virtualization.

I wouldn't worry about CPUs much. With no sarcasm, I run my vSphere cluster on E-350s with 8GB of memory (because I don't do anything serious with VMware these days, but I still need something fast enough to prototype on or compare features). The T110's CPU is fine.
|
# ? Sep 16, 2013 20:20 |
|
Yeah, on almost every small network I can think of, memory and disk size are your limiting factors. You need a LOT of users or a seriously intense application to peg out a modern CPU. Best practice is to give each VM the minimum number of vCPUs possible. Try to keep per-vCPU usage between 30-80%; if you go above 80%, add another vCPU. Adding unused vCPUs generally costs memory overhead and can cause weird contention issues in the CPU scheduler. Network contention can be an issue, but usually only if you're not using separate NICs for iSCSI/FCoE and vMotion. Generally they should all be segregated unless you've got 10GbE.

El_Matarife fucked around with this message at 20:25 on Sep 16, 2013
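That 30-80% rule of thumb is mechanical enough to write down. A toy encoding (my own helper, not anything official from VMware):

```python
def recommend_vcpus(current_vcpus, avg_util_pct):
    """Suggest a vCPU count from average per-vCPU utilization (0-100),
    following the 30-80% comfort zone described above."""
    if avg_util_pct > 80:
        return current_vcpus + 1          # pegged: give it one more vCPU
    if avg_util_pct < 30 and current_vcpus > 1:
        return current_vcpus - 1          # mostly idle: shed the scheduling overhead
    return current_vcpus                  # already in the 30-80% band

recommend_vcpus(2, 85)  # -> 3
recommend_vcpus(4, 20)  # -> 3
```

In practice you'd feed it a longer-term average from your monitoring rather than a single sample, since one busy hour shouldn't earn a VM a permanent extra vCPU.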
# ? Sep 16, 2013 20:22 |
|
Agreed. CPU for days, never enough memory. New hosts are coming in soon though with 256GB which will help a lot.
|
# ? Sep 16, 2013 20:32 |
|
Interesting. I guess I could give it a shot. First I'm going to come up with a serious 5 year plan though to hopefully get some more funds. The only single point of failure that really worries me is that they won't spend enough on a NAS/SAN that has redundant controllers and NICs and we'll be left down for a couple days.
|
# ? Sep 16, 2013 20:39 |
|
Misogynist posted: it's recommended that you reserve those resources (this is an HA setting) to ensure the other hosts in your cluster have enough spare capacity to start up your VMs in the event of a complete host failure.

Two ways to handle that:

1) Make sure you have N+1 capacity (run 3 hosts at 66% or less, so a single host failure puts the two remaining near 100%).
2) Accept an over-capacity situation in the event you lose a host. It's a pretty rare event, so some memory pressure and CPU wait time might be palatable to your management in order to save some money.

We go with option 1 in our environment: we have 6 hosts in our primary cluster and run each one around 80% on CPU and RAM.
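The arithmetic behind option 1 is easy to sanity-check. A throwaway sketch, assuming a failed host's load redistributes evenly across the survivors:

```python
def post_failure_load(n_hosts, per_host_util_pct, failures=1):
    """Average utilization on surviving hosts after `failures` hosts die,
    assuming the lost hosts' load spreads evenly across the survivors."""
    survivors = n_hosts - failures
    if survivors < 1:
        raise ValueError("no hosts left to absorb the load")
    return per_host_util_pct * n_hosts / survivors

post_failure_load(3, 66)  # -> 99.0: three hosts at 66% land near 100%
post_failure_load(6, 80)  # -> 96.0: the 6-host cluster above runs hot after a failure
```

Note the 6-host case: even at 80% per host you survive a single failure, just with very little headroom, which is the trade-off being described.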
|
# ? Sep 17, 2013 00:05 |
|
IT Guy posted: That's kind of the problem. The current way we buy hardware is to buy cheap poo poo that basically runs the server itself and nothing more.

That really will take you a lot farther than you think. http://www.vmware.com/resources/com...r&sortOrder=Asc

Of those T110s' 16GB, how much RAM is actually being used? Evaluate your hosts on CPU (which is probably <10%), memory, and disk loads. Chances are you can buy an Intel DP/QP network adapter off CDW, throw it in there, and get what you need out of the box. VMware also has Transparent Page Sharing, so identical blocks of memory can be 'de-duped' in a sense, stretching memory utilization further. Not to mention BIOS and firmware updates can also help squeeze extra performance out of those boxes.

IT Guy posted: Interesting. I guess I could give it a shot. First I'm going to come up with a serious 5 year plan though to hopefully get some more funds. The only single point of failure that really worries me is that they won't spend enough on a NAS/SAN that has redundant controllers and NICs and we'll be left down for a couple days.

There is software out there that can take DAS (direct-attached storage) and present it as shared storage to all hosts for things like HA/DRS/vMotion. VMware has one called the VSA, though keep in mind it has some sizable limitations; Nexenta also has one that's pretty good and ZFS-based; and VSAN is coming down the pipe as we speak and looks pretty cool. A redundant-controller NAS isn't terribly expensive either. Granted it isn't free, but the VNXe's are fairly price-competitive, and HP's MSAs are, well, you get what you pay for.

Dilbert As FUCK fucked around with this message at 01:01 on Sep 17, 2013
# ? Sep 17, 2013 00:52 |
|
Dilbert As FUCK posted: That really will take you a lot farther than you think.
|
# ? Sep 17, 2013 00:55 |
|
Dilbert As FUCK posted: Vmware also has Transparent Page sharing, thus like blocks of memory can be 'de-duped' in a sense and provide bigger increases in memory utilization.

Since they changed their page size you really don't see any benefit until you get under memory pressure and it switches to smaller pages. Not a huge deal, but you have to be seeing performance issues before you get memory dedupe. In the end it probably ends up the same, but for sizing purposes, I don't think TPS is a factor any longer.
|
# ? Sep 17, 2013 01:06 |
|
IT Guy posted: Interesting. I guess I could give it a shot. First I'm going to come up with a serious 5 year plan though to hopefully get some more funds. The only single point of failure that really worries me is that they won't spend enough on a NAS/SAN that has redundant controllers and NICs and we'll be left down for a couple days.

Evaluate what you have and what you can reuse; just because you are going virtual doesn't mean you can't reuse servers. Look at what's on the HCL, still has a renewable warranty, and doesn't have known HW issues. You can stretch that 20k much farther than you think if it turns out you can add some RAM and network adapters to 3 hosts and just 1up the warranty.

adorai posted: Since they changed their page size you really don't see any benefit until you get under memory pressure and it switches to smaller pages. Not a huge deal, but you have to be seeing performance issues before you get memory dedupe. In the end it probably ends up the same, but for sizing purposes, I don't think TPS is a factor any longer.

I thought it was also based on whether the host was oversubscribed with more vRAM than pRAM, so even if you weren't hitting the X% threshold, TPS would still kick in.

Dilbert As FUCK fucked around with this message at 01:11 on Sep 17, 2013
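For what it's worth, the oversubscription condition being debated is just a ratio. The numbers and names here are illustrative, not anything ESXi itself exposes:

```python
def overcommit_ratio(vm_vram_gb, host_pram_gb):
    """vRAM:pRAM ratio for one host. Above 1.0 the host is
    oversubscribed, which is when TPS/ballooning start to matter."""
    return sum(vm_vram_gb) / host_pram_gb

overcommit_ratio([4, 4, 4, 8], 16)  # -> 1.25, oversubscribed
overcommit_ratio([4, 4], 16)        # -> 0.5, plenty of headroom
```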
# ? Sep 17, 2013 01:09 |
|
I've been auditing all of our VMware license keys and we're missing 4 somewhere. They're not in our portal or order history. Followed up with my VAR; somewhere in the chain they were issued to a company with a similar name in UGANDA. Hopefully this isn't too painful to fix.
|
# ? Sep 17, 2013 03:15 |
|
skipdogg posted: I've been auditing all of our VMware license keys and we're missing 4 somewhere. Not in our portal or order history. Followed up with my VAR, somewhere in the chain they were issued to a company with a similar name in UGANDA. Hopefully this isn't too painful to fix.

A lot of our SmartNet stuff ends up on a different company's account with the same name but about 1,500 miles away.
|
# ? Sep 17, 2013 03:19 |
|
I think someone needs to extend Cloud To Butt Plus to cover "software-defined datacenter".
|
# ? Sep 17, 2013 17:02 |
|
So it turns out that years and years ago, when we first started virtualizing, we bought 2 Acceleration Kits, so we have 2 licenses for vCenter. We have 2 main datacenters, one on the East Coast and one on the West. vCenter currently lives on the West Coast. Is it possible/recommended/feasible to set up some sort of failover vCenter in the East Coast DC in case the West Coast drops for some reason? I'm going to dig through my books and Google, but I figured what the hell, I'll ask here too.
|
# ? Sep 17, 2013 18:32 |
|
I'm looking for a remote VDI solution that is secure but highly accessible, i.e. works over HTTPS, through proxies, and is hard to block. Legit business case: workers go on site and need to reach back to their main-office desktops to demo the app (it's not portable). This usually works great over plain Remote Desktop (RDWeb); the issue is all the things that network people do to keep their networks safe from the Chinese/NSA.

As I have said, we are currently using RDWeb/RDGateway/Hyper-V for this. We have vSphere/vCenter for another item, so Horizon is on the table, and I have requested a sales call from Citrix. Really, I'm looking for https://myhypervfarm.logmein.com | https://myvdi.join.me (except hosted on our domain, login.mybiz.com).

Please advise of anything worth looking at that is not VMware or Citrix (is there a separate Microsoft product I am missing?), or of a specific aspect of either of those two that is better for what I'm after. Note it's 12:00am and I have been working these issues for the last 16 hours, so any option is worth consideration if it's more reliable than what we have. I am throwing myself on the mercy of goon wisdom at this point.
|
# ? Sep 18, 2013 05:04 |
|
KennyG posted: I'm looking for a remote VDI solution that is secure but highly accessible... I.e. works over HTTPS, through proxies, hard to block.

We use a Juniper SA SSL VPN for secure RDP access for our guys. We use a hardware model, but it's available as a virtual appliance now. The thing is a never-ending font of useful features. It supports a ton of authentication methods (use some form of two-factor), can do policy-based host checking on login (AV installed and current, firewall on, etc.), and then provides a web interface to predefined resources: internal websites, RDP, SMB shares, and so on. It can also provide GoToMeeting-style functionality to let remote people view/control your employees' desktops.
|
# ? Sep 18, 2013 14:07 |
|
skipdogg posted:So it turns out years and years ago when we first started virtualizing we bought 2 acceleration kits, so we have 2 licenses for vCenter. We have 2 main datacenters, one on East Coast, one on the West. vCenter currently lives on the west coast. Is it possible/recommended/feasible to setup some sort of failover vCenter in the east coast DC in case west coast drops for some reason? I'm going to dig through my books and google, but I figured what the hell, I'll ask here too. vCenter Heartbeat. Never heard of anyone using it though.
|
# ? Sep 18, 2013 14:39 |
|
KennyG posted:I'm looking for a remote VDI solution that is secure but highly accessible... I.e. works over HTTPS, through proxies, hard to block. XenDesktop works through 80/443 by default, and I've had good results with it in environments that lock everything down.
|
# ? Sep 18, 2013 14:52 |
|
KennyG posted: I'm looking for a remote VDI solution that is secure but highly accessible... I.e. works over HTTPS, through proxies, hard to block.

Both Citrix and Horizon would let you do this. Are you looking for access to the desktop via a web page? Horizon has HTML Access, which gives you your sound/video/desktop etc. all through the browser. Citrix does the same, but I think Citrix requires a client-side plug-in. My View lab should be up tonight; I can PM you the URL if you want to see it in action.
|
# ? Sep 18, 2013 19:20 |
|
KennyG posted:I'm looking for a remote VDI solution that is secure but highly accessible... I.e. works over HTTPS, through proxies, hard to block. What's not working? RD Gateway should only need 443 incoming, which everyone should allow out.
|
# ? Sep 18, 2013 20:18 |
|
Okay, now that everything's been purchased and delivered, I'm in the process of setting up my first production ESXi cluster. Looking at Dell's documentation for the PowerVault MD3220i, it looks like the SAN can have up to four iSCSI channels. Am I correct in understanding that each channel should be on a separate VLAN and subnet? I'm also seeing in VMware's best practices guide that the management port should be able to fail over to the vMotion port and vice versa, FT can fail over to the VMkernel port, etc., and that those should be on separate VLANs. Does this mean that each port on my Cisco switches should be in trunk mode so they can handle multiple VLANs? Thanks!
|
# ? Sep 18, 2013 23:12 |
|
Tequila25 posted: Okay, now that everything's been purchased and delivered, I'm in the process of setting up my first production ESXi cluster.

I used to run a few MD3220i's; good little boxes for redundancy, ease of use, and manageability at that price. Set up each of your iSCSI IP addresses on its own subnet/VLAN; that was a big thing a Dell tech complained about when I had to call support for something unrelated. Correct on trunk mode. We are running 10GbE here, 2x10GbE and 4x1GbE per host. Currently I have the 2x10GbE running into trunk ports on separate switches with VLANs for MGMT, iSCSI, and VM Network, and only use 2x1GbE for vMotion.
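If it helps, laying out one subnet and one VLAN per iSCSI channel is easy to script with the stdlib. The supernet and VLAN numbers below are made up for illustration, not anything the MD3220i mandates:

```python
import ipaddress

def iscsi_plan(supernet, n_channels, first_vlan):
    """Carve one /24 (and one VLAN ID) per iSCSI channel
    out of a larger block, so no two channels share a subnet."""
    subnets = ipaddress.ip_network(supernet).subnets(new_prefix=24)
    return [(first_vlan + i, str(net)) for i, net in zip(range(n_channels), subnets)]

iscsi_plan("10.10.0.0/22", 4, 100)
# -> [(100, '10.10.0.0/24'), (101, '10.10.1.0/24'),
#     (102, '10.10.2.0/24'), (103, '10.10.3.0/24')]
```

Keeping the channels in adjacent /24s like this makes the switch config easy to eyeball, but any four non-overlapping subnets work.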
|
# ? Sep 18, 2013 23:18 |
|
three posted: XenDesktop works through 80/443 by default, and I've had good results with it in environments that lock everything down.

In my experience you need an Access Gateway or VPX for XenDesktop over the internet. The XenDesktop broker will refer to desktops by hostname even, so not only do you need to be able to connect directly to them (without an Access Gateway or VPX), you also need to resolve them by hostname.
|
# ? Sep 19, 2013 00:24 |
|
adorai posted:In my experience you need an access gateway or vpx for xendesktop over the internet. the xendesktop broker will refer to desktops by hostname even, so not only do you need to be able to connect directly to them (without an access gateway or vpx) but you also need to resolve them by hostname. Yes, he'd need that. I didn't get from his post that he wanted the super cheapest option.
|
# ? Sep 19, 2013 01:15 |
|
skipdogg posted:So it turns out years and years ago when we first started virtualizing we bought 2 acceleration kits, so we have 2 licenses for vCenter. We have 2 main datacenters, one on East Coast, one on the West. vCenter currently lives on the west coast. Is it possible/recommended/feasible to setup some sort of failover vCenter in the east coast DC in case west coast drops for some reason? I'm going to dig through my books and google, but I figured what the hell, I'll ask here too. Honestly, I'd run a vCenter on the East Coast and one on the West Coast and put them in Linked Mode.
|
# ? Sep 19, 2013 02:35 |
|
skipdogg posted:What's not working? RD Gateway should only need 443 incoming, which everyone should allow out. Was going to say this. Remote Desktop Gateway encapsulates RDP traffic in SSL/TLS so you only need port 443 inbound: http://technet.microsoft.com/en-us/library/cc731150.aspx
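When a client site still breaks it, a ten-line probe can at least confirm whether 443 answers at all before you blame the gateway. A minimal sketch; the gateway hostname is hypothetical:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """TCP reachability probe. Only proves the port answers;
    an interfering proxy could still break the TLS layer above it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# port_reachable("rdgw.example.com", 443)  # hypothetical gateway name
```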
|
# ? Sep 19, 2013 05:19 |
|
skipdogg posted: What's not working? RD Gateway should only need 443 incoming, which everyone should allow out.

You'd think that, but some firms we visit have a weird layered proxy that kills the connection. It's bat-poo poo crazy, and I'm fighting with our management and theirs, showing every step of the way to anyone who will listen that their ridiculous config is doing stupid things. Our management wants to throw money at the problem (and bill it back to the client), and while that's not a view I share personally, it's certainly a behavior that, on a Pavlovian basis, I want to reward.

Dilbert, I lost plat on my last ban. Can you email me kj at stillabower dot com

KennyG fucked around with this message at 12:21 on Sep 19, 2013
# ? Sep 19, 2013 12:18 |
|
KennyG posted:You'd think that, but some firms we visit have a weird layered proxy that is killing the connection. Skip the client's internet connection and use an aircard?
|
# ? Sep 20, 2013 03:06 |
|
echo465 posted: Skip the client's internet connection and use an aircard?

I appreciate the brainstorming, but it's multiple users at a site, many of whom are the client's people. Sometimes we don't even go to the site and it's them (the customer) logging in remotely. Aircards are just a no-go.
|
# ? Sep 20, 2013 16:25 |
|
If your RDS config is set up right and you're still having issues, the only thing I can think of is some kind of SSL VPN (Juniper is what we use) to launch the RDP session. Are these places double-NATting their internet connection? That can play havoc with some SSL stuff sometimes.
|
# ? Sep 20, 2013 16:28 |
|
Is there any way to demo/evaluate vCenter Server? I can't find anything on the VMware website at all.
|
# ? Sep 20, 2013 18:58 |
|
|
|
IT Guy posted: Is there any way to demo/evaluate vCenter Server? I can't find anything on the VMware website at all.

https://www.vmware.com/try-vmware/

You will have to make a login.
|
# ? Sep 20, 2013 19:07 |