Polaris_Echoes
Dec 6, 2009
Would you guys recommend this book?

The Official VCP5 Certification Guide

I'm going for the VCP5 and went through the Install, Configure, Manage course already. I'm torn between this one and the Scott Lowe book. Any opinions on which one would do a better job for preparing me for the test?

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Polaris_Echoes posted:

Would you guys recommend this book?

The Official VCP5 Certification Guide

I'm going for the VCP5 and went through the Install, Configure, Manage course already. I'm torn between this one and the Scott Lowe book. Any opinions on which one would do a better job for preparing me for the test?

I think this one is great to prepare you to take the test: VCP5 VMware Certified Professional on vSphere 5 Study Guide: Exam VCP-510. It basically goes through the blueprint.

I haven't seen the latest Scott Lowe book, but his older one was awesome for content but not really for preparing for the test.

Polaris_Echoes
Dec 6, 2009

three posted:

I think this one is great to prepare you to take the test: VCP5 VMware Certified Professional on vSphere 5 Study Guide: Exam VCP-510. It basically goes through the blueprint.

I haven't seen the latest Scott Lowe book, but his older one was awesome for content but not really for preparing for the test.

I saw on the VMWare community that a lot of people had recommended that book as well. I've only been doing this since November, and on the blueprint they recommended that you have worked with vSphere for six months. If I schedule my exam for mid/end April do you think that'll give me enough time to study/hands on experience? I took the practice exam this morning and got a 50% so that kind of kicked me into stressing out about it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
That book is basically the Exam Blueprint (as it was at the time of publication), and each chapter details the blueprint. It's pretty good; however, realize the blueprint is updated pretty regularly, and you will want to check it out and do some extra research on it.

What I did was read the Scott Lowe book, download the blueprint, and define the objectives.

E: I can't really speak for the Sybex 510 version, but the 410 was pretty crap in my opinion.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
The VCP5 Sybex book is by a new author, and I think reading it and doing the practice tests is a great way to prepare. The test isn't super easy, but it is true to the blueprint (which that book follows exactly).

thebigcow
Jan 3, 2001

Bully!

Docjowles posted:

My personal opinion is that if you're just going to dump everything onto one server running ESXi free, you shouldn't bother. Yeah you've decommed a bunch of crappy old servers, but you've also put all your eggs in one basket. If that machine dies, everything is down whereas before just the one service on that box would have been. Realistically you want some form of shared storage and one of the VMware Essentials kits (ideally Essentials Plus, but the basic kit is an option if VM downtime during maintenance/hardware failures is acceptable). like evil_bunnY says, you need vCenter Server to get all the cool features you think of when you hear "VMware".

Otherwise look at another solution, perhaps Ganeti with DRBD which can give you failover without shared storage, at the cost of higher I/O and bandwidth requirements. We use that for our non-production servers and it does OK. I don't love it but it's developed and used internally by Google so it's not a total hack job ;)

Doesn't that just put all your eggs in one san?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

thebigcow posted:

Doesn't that just put all your eggs in one san?
A decent SAN is built for high availability and doesn't have any one component that functions as a single point of failure.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
And those setups may also be more than stevewm is looking to spend on a handful of servers.

citywok
Sep 8, 2003
Born To Surf

Corvettefisher posted:

And those setups may also be more than stevewm is looking to spend on a handful of servers.

We just got a dual-controller Dell FC SAN (MD3620f) with 4TB of disk space for $12k; granted, we needed more I/O than capacity, so we bought lower-density disks that cost a bit more per GB. It's not THAT expensive.

Demonachizer
Aug 7, 2004
I am trying to flesh out a test environment for a VMware project that will be 3 hosts connecting to a SAN. I am considering having the network guy setup vlans as follows and just want to know if it makes sense.

1 vlan for management network. 1 physical port per host (should this be private non-routable?)
1 vlan for vMotion. 1 physical port per host. Private/non routable
1 vlan for FT. 1 physical port per host. Private/non routable
1 vlan for guest network. 2 teamed physical ports per host. Firewalled/routable
1 vlan for iSCSI. 1 physical nic per host. Private/non routable

This seems like I am overshooting the needs for physical ports a bit. Can the vMotion/FT groups be combined to work on the same vlan and physical port? It is possible that we may end up putting in a second redundant switch down the road so we are looking at 12 physical ports in such an instance which seems high.

Also, is the worst downside to not having a secondary active switch the recovery time from a switch failure? We are comfortable with some downtime and data loss in case of a switch failure, but we wouldn't want to have to reconfigure the entire environment or something. We have some spare switches for failures and will back up the config so we can use them in our main network or in this VM environment.

Demonachizer fucked around with this message at 16:20 on Jan 8, 2013

Kerpal
Jul 20, 2003

Well that's weird.
edit: never mind, I don't need this. luminalflux, thanks for the suggestion!
Anyone know about hot-adding CPUs in Debian Linux? I've got a development web server that I'm running reports and doing lots of testing on, but it's starting to get to the point where Apache can no longer serve requests. I have a Debian Linux machine running on ESXi 5.0 with one 2.393 GHz core (Xeon E5620) and 512 MB of memory allocated. We still have plenty of resources available, but I figured upping the CPU and possibly the memory may be the best solution.

We're using Kohana via PHP and MySQL in our development environment, and everything is very fast and responsive when I'm not running anything. I've also noticed when running top that Apache (by default, I believe) runs multiple threads, so I would expect it to take advantage of multiple cores.

There is a doc that sort of covers this at http://communities.vmware.com/docs/DOC-10493, but it applies to 4.0 and to Ubuntu (which I know is Debian-based).

Any help is greatly appreciated.


Kerpal fucked around with this message at 21:37 on Jan 8, 2013

KS
Jun 10, 2003
Outrageous Lumpwad

demonachizer posted:

I am trying to flesh out a test environment for a VMware project that will be 3 hosts connecting to a SAN. I am considering having the network guy setup vlans as follows and just want to know if it makes sense.

I assume these are gigabit ports? If they're 10 gbit, you're overthinking it entirely.

Are you using FT for anything right now? It has severe restrictions and you should steer away from it if possible. That said, if you're actually going to use it (seriously, don't) it needs a dedicated connection. I do think a separate vmotion network is worth it on 1 gbit, but you wouldn't need to double that up if you went to redundant switches. You can also use a dedicated switch here if your DC layout allows it to save on ports on whatever switch you're using for guest traffic.

Management can be carried on the same NICs as your guest networks. It is low overhead and it has the added benefit of making it fault tolerant as well.

If you add in redundant switching, you'd want your guest/mgt traffic carried by a trunk with two members, one to each switch in a vpc pair. You can choose whether to do iscsi redundancy at layer 2 or 3, but layer 3 seems better on 1 gbit networks -- you'd want a connection from each host to each of two switches, and separate iscsi subnets on each.

Demonachizer
Aug 7, 2004

KS posted:

I assume these are gigabit ports? If they're 10 gbit, you're overthinking it entirely.

Are you using FT for anything right now? It has severe restrictions and you should steer away from it if possible. That said, if you're actually going to use it (seriously, don't) it needs a dedicated connection. I do think a separate vmotion network is worth it on 1 gbit, but you wouldn't need to double that up if you went to redundant switches. You can also use a dedicated switch here if your DC layout allows it to save on ports on whatever switch you're using for guest traffic.

Management can be carried on the same NICs as your guest networks. It is low overhead and it has the added benefit of making it fault tolerant as well.

If you add in redundant switching, you'd want your guest/mgt traffic carried by a trunk with two members, one to each switch in a vpc pair. You can choose whether to do iscsi redundancy at layer 2 or 3, but layer 3 seems better on 1 gbit networks -- you'd want a connection from each host to each of two switches, and separate iscsi subnets on each.

They are all gigabit. I am only leaving in the FT just in case someone demands it. I think it will quietly fade away because of all the stupid poo poo required to keep it going. I am of the mind that if we aren't willing to foot the bill for a dedicated secondary switch because downtime is OK, then why would we go through all that for FT?

Changing it to:

1 vlan for vMotion. 1 physical port per host. Private/non routable
1 vlan for FT. 1 physical port per host. Private/non routable *probably going to kill it in production*
1 vlan for guest network/management. 2 teamed physical ports per host. Firewalled/routable
1 vlan for iSCSI. 1 physical nic per host. Private/non routable
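
From the ESXi shell that should come out to something like the below per host, if I'm reading the esxcli reference right (sketch only: vmnic numbers, VLAN IDs, and addresses are placeholders until the network guy confirms them, and the VLAN tags only matter if the switch ports are trunks rather than access ports):

code:

# Guest + management share vSwitch0 (exists by default) with two teamed uplinks
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="VM Network"   # may already exist
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=10

# vMotion on its own vSwitch, uplink, and VMkernel port
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.20.0.11 --netmask=255.255.255.0 --type=static
# (vMotion itself still gets ticked on vmk1 in the vSphere Client)

# iSCSI (and FT, if it survives) get the same pattern on vmnic3/vmnic4 with their own VLANs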

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

citywok posted:

We just got a dual-controller Dell FC SAN (MD3620f) with 4TB of disk space for $12k; granted, we needed more I/O than capacity, so we bought lower-density disks that cost a bit more per GB. It's not THAT expensive.

I wouldn't really consider a dual-controller single chassis an HA deploy. A fire in a PSU or controller could trash the box, but then again I could be over-analyzing it a bit.

demonachizer posted:

I am trying to flesh out a test environment for a VMware project that will be 3 hosts connecting to a SAN. I am considering having the network guy setup vlans as follows and just want to know if it makes sense.

1 vlan for management network. 1 physical port per host (should this be private non-routable?)
1 vlan for vMotion. 1 physical port per host. Private/non routable
1 vlan for FT. 1 physical port per host. Private/non routable
1 vlan for guest network. 2 teamed physical ports per host. Firewalled/routable
1 vlan for iSCSI. 1 physical nic per host. Private/non routable

This seems like I am overshooting the needs for physical ports a bit. Can the vMotion/FT groups be combined to work on the same vlan and physical port? It is possible that we may end up putting in a second redundant switch down the road so we are looking at 12 physical ports in such an instance which seems high.

Also, is the worst downside to not having a secondary active switch the recovery time from a switch failure? We are comfortable with some downtime and data loss in case of a switch failure, but we wouldn't want to have to reconfigure the entire environment or something. We have some spare switches for failures and will back up the config so we can use them in our main network or in this VM environment.

How many VMs?
What kind of VMs? SQL, web, VDI?

I prefer to use 10G for iSCSI and vMotion (the Cisco 4500-X is a good deal), 1G for everything else, and have yet to see someone request FT.

I wouldn't mix FT and vMotion, as vMotion is going to try to use EVERYTHING on that port, and FT requires a high-bandwidth, low-latency network since processing requests have to be replicated to a second server. If you start vMotioning, FT VMs could start erroring out.

The worst case is the company is dead in the water for 4+ hours and losing money each minute, iSCSI may throw a fit, and VMDKs may have an unclosed lock (rare) which may require an SP reboot. If that is acceptable, then by all means.
Not to mention you could put a great deal of load on the switch during maintenance and HA events. I would strongly recommend 2 switches for failover and load balancing.

Dilbert As FUCK fucked around with this message at 17:25 on Jan 8, 2013

luminalflux
May 27, 2005



Kerpal posted:

Anyone know about hot-adding CPUs in Debian Linux?
This is what I do on CentOS:

Hot-add the vCPUs.
Run dmesg to see that they've been discovered.
Run # echo 1 > /sys/devices/system/cpu/cpu1/online to enable the CPU

Hot-adding CPUs is no problem. Hot-adding memory from under 4GB to over 4GB might be an issue, since bounce buffers are disabled in the IOMMU by default if under 4GB is probed on boot.
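
If you end up doing it a lot, a tiny loop saves typing. It uses the same sysfs interface, so it should behave the same on Debian, but treat it as a sketch:

code:

#!/bin/sh
# Bring online any hot-added vCPUs the kernel has discovered but not yet enabled.
# cpu0 normally has no 'online' file and gets skipped.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -f "$cpu/online" ] || continue
    if [ "$(cat "$cpu/online")" = "0" ]; then
        echo "onlining $(basename "$cpu")"
        echo 1 > "$cpu/online"
    fi
done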

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Speaking of hot-adding, what's the process for hot-growing a disk (SAN, VMDK, whatever) and having the kernel actually detect and use the new capacity? Rescanning the SCSI bus doesn't work like it does with hot-added disks.

Demonachizer
Aug 7, 2004

Corvettefisher posted:

How many VMs?
What kind of VMs? SQL, web, VDI?

I prefer to use 10G for iSCSI and vMotion (the Cisco 4500-X is a good deal), 1G for everything else, and have yet to see someone request FT.

I wouldn't mix FT and vMotion, as vMotion is going to try to use EVERYTHING on that port, and FT requires a high-bandwidth, low-latency network since processing requests have to be replicated to a second server. If you start vMotioning, FT VMs could start erroring out.

The worst case is the company is dead in the water for 4+ hours and losing money each minute, iSCSI may throw a fit, and VMDKs may have an unclosed lock (rare) which may require an SP reboot. If that is acceptable, then by all means.
Not to mention you could put a great deal of load on the switch during maintenance and HA events. I would strongly recommend 2 switches for failover and load balancing.

I think the total number of VMs is around 10-15 max on the 3 hosts, with the current number of replaced servers sitting at about 7. It is way overbuilt for current needs, hardware-wise. No DBs; those will still be on separate physical machines, and we aren't even close to talking about VDI in any real sense yet. Possibly a web application front end, a print server, various file servers that may be consolidated if I can force the right people into line, and a couple of low-resource application servers.

I would have liked 10GbE, but at this point it is not an option as the SAN has 1GbE controllers. There were a lot of decisions made prior to my involvement with this project that were not great, but I sort of have to just get something up and running from it, as it has been 1.5 years since the initial purchases. I am studying for my VCP and will take the course in March, two weeks after this goes live :).

Do you think that 4 hours is the downtime for a switch failure, give or take? We were figuring around two hours to get the new switch in, flash the config, and restart the farm from most important to least.

Demonachizer fucked around with this message at 18:25 on Jan 8, 2013

luminalflux
May 27, 2005



partprobe will pick up the changed size of the device. Then it's just a matter of either adding a new LVM physical volume to the VG or editing the partition table with fdisk to add more space at the end.
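
The whole dance for a grown VMDK backing an LVM PV looks roughly like this (device and VG/LV names are examples; if the PV sits on a partition, you grow the partition with fdisk first, as above):

code:

# Make the kernel re-read the size of that one device (per-device rescan, not a bus-wide rescan)
echo 1 > /sys/block/sdb/device/rescan

# Re-read the partition table if the disk is partitioned
partprobe /dev/sdb

# PV directly on the disk: grow the PV, the LV, then the filesystem (online for ext3/ext4)
pvresize /dev/sdb
lvextend -l +100%FREE /dev/vg0/data
resize2fs /dev/vg0/data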

thebigcow
Jan 3, 2001

Bully!
nvm

citywok
Sep 8, 2003
Born To Surf

Corvettefisher posted:

I wouldn't really consider a dual-controller single chassis an HA deploy. A fire in a PSU or controller could trash the box, but then again I could be over-analyzing it a bit.

Agreed. Granted, we have a Compellent system that will not boot without both controllers (separate 3U chassis) being up; that's GREAT redundancy right there, isn't it?

You can only expect so much out of a system that cost $12k. We're not an E911 center, so it doesn't make sense to pay $100k for a system that is "fully redundant" like our older Compellent... lol.

We could get a second shelf and use LUN mirroring to handle a potential full-on crash, but that wouldn't be HA :-\

movax
Aug 30, 2008

So I am planning on co-locating a box. It's an HP ProLiant ML110 G7 that's got a quad-core Sandy Bridge Xeon (4C/4T), 32GB of RAM (soon), 4x2TB drives, 2x256GB SSDs, and 2x80GB SSDs. I want to virtualize for sure: a data storage VM (running on one of the smaller SSDs, probably passing through an HBA and the four 2TB disks since I have VT-d) and then 1 or 2 VMs that'll run on the 2x256GB pool of SSDs and host the actual web application.

ESXi seems good to me (everything but the second NIC is on the HCL, I believe), but this box is going out on the big bad internet and not behind my safe little home router. What are the security best practices for deploying an ESXi box without a hardware firewall? Don't do it? Obviously the VMs on the host will have their software firewalls up (the data storage VM shouldn't be directly internet-accessible anyway), but what about ESXi management?

The other opinions I've been getting mention running Linux + KVM or something as the hypervisor, but I'm more inclined towards ESXi.

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
ESXi does have a built-in firewall which would probably be sufficient as long as you lock it down. Is there no way you can get a cheap firewall anyway? Maybe an ASA5505 or similar?
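
If you go that route, it's all driven through esxcli on 5.x. Something along these lines, with the subnet obviously a placeholder and the ruleset names being whatever "esxcli network firewall ruleset list" shows on your build:

code:

# See what's currently open
esxcli network firewall ruleset list

# Limit management services to a trusted subnet instead of all IPs
esxcli network firewall ruleset set --ruleset-id=sshServer --allowed-all=false
esxcli network firewall ruleset allowedip add --ruleset-id=sshServer --ip-address=203.0.113.0/24
esxcli network firewall ruleset set --ruleset-id=vSphereClient --allowed-all=false
esxcli network firewall ruleset allowedip add --ruleset-id=vSphereClient --ip-address=203.0.113.0/24

# Drop anything that doesn't match an enabled ruleset
esxcli network firewall set --default-action=false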

movax
Aug 30, 2008

Nitr0 posted:

ESXi does have a built-in firewall which would probably be sufficient as long as you lock it down. Is there no way you can get a cheap firewall anyway? Maybe an ASA5505 or similar?

It's basically a mid-tower colocation special, so if it's something I could ghetto-shoehorn into the case, that could probably work. An ASA5505 is kind of pricey for me, though. I'd consider doing a little SW firewall VM like m0n0wall or pfSense, but then I run into a chicken-and-egg problem protecting the hypervisor, don't I?

I guess as a comedy option I could buy a MikroTik or something and place it in the 5.25" bay :v:

luminalflux
May 27, 2005



Soekris and duct tape.

Erwin
Feb 17, 2006

Monowall should be fine. Have the colo only connect one NIC to their network/internet drop. Make sure no VMKernel ports are set up on that NIC, and make a VM Network that only the Monowall VM is on. The second (inside) Monowall NIC can be on another VM network with all the other VMs. Proper routing and VPN connections on the Monowall, or a Windows VM you can remote into, can allow management of ESXi.

Since it's a "mid-tower colocation special" I assume that means you don't have the option of out of band management anyway. The only chicken and egg scenario happens when ESXi dies enough to bring down Monowall or the management VM. Hopefully your colo could lend a hand in that situation.

IOwnCalculus
Apr 2, 2003





I like that last option most of all, only because it's the kind of not-quite-awful hack I would probably try to do.

It seems like the software firewall option would be theoretically possible, since it looks like you can create a management network that isn't associated with an actual physical adapter. Probably would be a mess to fix if for some reason the software firewall didn't start up, though.

luminalflux
May 27, 2005



Erwin posted:

Since it's a "mid-tower colocation special" I assume that means you don't have the option of out of band management anyway. The only chicken and egg scenario happens when ESXi dies enough to bring down Monowall or the management VM. Hopefully your colo could lend a hand in that situation.

According to the specs on hp.com, the ML110 G7 has iLO. It might be shared with the first NIC port, though; I haven't really touched that series, and it seems a dedicated iLO port is an option.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Corvettefisher posted:

I wouldn't really consider a dual-controller single chassis an HA deploy. A fire in a PSU or controller could trash the box, but then again I could be over-analyzing it a bit.
Really?

A fire is a disaster. A fire in your datacenter is going to kill a fuckload more than your SAN. The fire suppression system is going to kill all of the power, flood the room with some kind of fire suppression gas, and evacuate the entire building until the fire department clears reentry. You had better be prepared to activate your DR plan for any kind of fire in your datacenter.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

demonachizer posted:

I think the total number of VMs is around 10-15 max on the 3 hosts, with the current number of replaced servers sitting at about 7. It is way overbuilt for current needs, hardware-wise. No DBs; those will still be on separate physical machines, and we aren't even close to talking about VDI in any real sense yet. Possibly a web application front end, a print server, various file servers that may be consolidated if I can force the right people into line, and a couple of low-resource application servers.

I would have liked 10GbE, but at this point it is not an option as the SAN has 1GbE controllers. There were a lot of decisions made prior to my involvement with this project that were not great, but I sort of have to just get something up and running from it, as it has been 1.5 years since the initial purchases. I am studying for my VCP and will take the course in March, two weeks after this goes live :).

Do you think that 4 hours is the downtime for a switch failure, give or take? We were figuring around two hours to get the new switch in, flash the config, and restart the farm from most important to least.

Yeah, 10GbE would be a waste at this point; I didn't realize the environment was that small. How over-provisioned are the servers? It is great to overestimate the hardware for those "oh, we forgot this is going there" moments and for future growth; however, if the servers are so over-provisioned that they will be obsolete before you hit 75% utilization of any resource, that is another matter. Weigh the expected rate of growth against the amount of over-provisioning.

4 hours is an estimate. If you have hardware on hand, that is different (and why not configure it for HA?); if you have a 24x7x4-hour tech service, that means it could be ticket request + tech onsite + tech troubleshooting + flash config + reboot required systems. You'll need to look at what the business wants those servers to do and how widespread the outage of a switch would be: users, productivity loss, and customers affected by the outage. Generally this will add up to much more than the cost of a 2960/3750.

movax posted:

So I am planning on co-locating a box. It's an HP ProLiant ML110 G7 that's got a quad-core Sandy Bridge Xeon (4C/4T), 32GB of RAM (soon), 4x2TB drives, 2x256GB SSDs, and 2x80GB SSDs. I want to virtualize for sure: a data storage VM (running on one of the smaller SSDs, probably passing through an HBA and the four 2TB disks since I have VT-d) and then 1 or 2 VMs that'll run on the 2x256GB pool of SSDs and host the actual web application.

ESXi seems good to me (everything but the second NIC is on the HCL, I believe), but this box is going out on the big bad internet and not behind my safe little home router. What are the security best practices for deploying an ESXi box without a hardware firewall? Don't do it? Obviously the VMs on the host will have their software firewalls up (the data storage VM shouldn't be directly internet-accessible anyway), but what about ESXi management?

The other opinions I've been getting mention running Linux + KVM or something as the hypervisor, but I'm more inclined towards ESXi.

ESXi has a firewall, you can run virtual firewall appliances, and virtual networking features that shape traffic do exist.

Basically what you'll need to do is this:
1. Create 2 VSSs, VSS 0 and VSS 1.
2. Give VSS 0 ONLY the uplinks needed to access the internet.
3. Give pfSense 2 NICs, one vNIC attached to VSS 0 and one to VSS 1.
4. Set up routing in pfSense.
5. On VSS 1, attach all your web servers.

For management, assign the management VMkernel port to a VSS that is internal only.
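
In esxcli terms it's roughly this (sketch only; switch/portgroup names and addresses are made up, the installer's default vSwitch0/vmk0 would need reworking to match, and you want console/iLO access handy in case the firewall VM ever doesn't come up):

code:

# VSS 0: the only vSwitch with a physical uplink to the colo drop
esxcli network vswitch standard add --vswitch-name=vSwitch-WAN
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-WAN --uplink-name=vmnic0
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-WAN --portgroup-name=WAN

# VSS 1: no uplinks at all, purely internal
esxcli network vswitch standard add --vswitch-name=vSwitch-LAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-LAN --portgroup-name=LAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-LAN --portgroup-name=Management

# Management VMkernel lives on the internal side, reachable only via pfSense routing/VPN
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Management
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.2 --netmask=255.255.255.0 --type=static

# pfSense VM gets two vNICs, one on WAN and one on LAN; the web server VMs attach to LAN only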

I can look up some other software/hardware firewalls if you tell me where the price point is.

E: Not a huge fan of virtualizing IO, but good luck!

I can send you some docs/configs if you want to look into it more as well

adorai posted:

Really?

A fire is a disaster. A fire in your datacenter is going to kill a fuckload more than your SAN. The fire suppression system is going to kill all of the power, flood the room with some kind of fire suppression gas, and evacuate the entire building until the fire department clears reentry. You had better be prepared to activate your DR plan for any kind of fire in your datacenter.

PSUs can blow and take out the box without causing widespread fire or smoke, leaving the rack and data center unharmed. Rails can break. Saying "fire" in my example may have been a bit extreme, granted.

Dilbert As FUCK fucked around with this message at 00:39 on Jan 9, 2013

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So, vCenter and its database: how many VMs should they be split across? Right now it's going to be 4 hosts, but it will probably grow to at least 10, with a lot of VMs. I know it will be too big for SQL Express, so we'll be using full SQL. Should I split them across 2 VMs? Or is a single dual- or tri-core VM sufficient? And what about the SSO component? Can that live on its own machine, or can it share with vCenter and SQL?

Syano
Jul 13, 2005

Corvettefisher posted:


PSUs can blow and take out the box without causing widespread fire or smoke, leaving the rack and data center unharmed. Rails can break. Saying "fire" in my example may have been a bit extreme, granted.

Sure, it could happen. But does it happen frequently? Using that logic, having a clustered solution wouldn't be HA either, because we could keep following that line of thinking: a PSU could catch fire and not catch anything else on fire, but then it could also catch the rack it was in on fire, and it could catch the next rack on fire, and it could... ad infinitum. The truth is that a dual-controller/dual-PSU SAN in the vein of a PowerVault 3200i or an EqualLogic kit is adequate high availability for pretty much anyone who is shopping in that price range.

thebigcow
Jan 3, 2001

Bully!

movax posted:

I guess comedy option I could buy a MikroTik or something and place it in the 5.25" bay :v:

An RB751 would fit nicely, although when its power supply dies you'd be in some trouble.

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
Help me understand VMware Hosts/Guests and their memory settings. We're planning to add a new Exchange server, and I want to make sure we have enough memory allocated to the VM. We're also probably going to add more memory to those blades if it's not too expensive, but I don't want to make my boss order more memory if we don't actually need it.

What I know:

* Our VM Host has 16 GB of total physical memory.
* Each VM has a hardware tab with a set amount of memory
* Each VM has a resource reservation for memory
* Each VM has a resource limit for memory

I have no idea why it was setup this way, I'm just starting to learn this poo poo now.

Hilariously, we have 2 VMs set for 3 GB hardware memory, a 1 GB reservation, and a 1 GB limit. There's another VM with 1 GB of memory, 512 MB reserved, and a 1 GB limit. Another has 1 GB hardware, reserved, and limited. Another has 5 GB hardware, a 1 GB reservation, and a 4 GB limit. Lastly, we have one with 2 GB of hardware, a 1 GB reservation, and a 1.5 GB limit.

Looking at the VM Host's Performance graphs, we have 1.7 GB active, 1.2 GB swap used, 1.1 GB shared common, 10.5 GB consumed, 12 GB granted, and 1.9 GB balloon.

If I'm reading all this right, we've basically reserved way more memory for all these VMs than they actually use. Also, some of our VMs will never use as much memory as Windows thinks it can use. All of this sounds relatively awful to me.

What are the best practices for hardware/reservation/limit? Is there anything wrong with setting the reservation to 0 on everything, and the limit to the hardware limit? Will that shrink our current consumed memory? It sounds to me like we could get away with only actually having 4-5 GB consumed, but I could be wrong here.

Frozen Peach fucked around with this message at 23:25 on Jan 9, 2013

Mierdaan
Sep 14, 2004

Pillbug
Best practice is to not use reservations and limits unless you really understand what you're doing.

Frozen Peach
Aug 25, 2004

garbage man from a garbage can

Mierdaan posted:

Best practice is to not use reservations and limits unless you really understand what you're doing.

By "not use" do you mean to set the reservation to 0 and limit to max for that VM? Or is there some magic "Let VMware decide everything and be awesome" option that I'm missing?

Mierdaan
Sep 14, 2004

Pillbug

Frozen-Solid posted:

By "not use" do you mean to set the reservation to 0 and limit to max for that VM? Or is there some magic "Let VMware decide everything and be awesome" option that I'm missing?

Right, set the reservation to 0 and check the "unlimited" box to clear the memory limit. I'd slowly undo reservations and limits as long as you have a good understanding of your workloads.

Reservations and limits are a really easy way to shoot yourself in the foot. For example, with memory reservations, say you have a machine that has 1GB hardware/reserve/limit. This means:
  • The VM won't power on unless there's 1GB memory available (also won't restart during an HA event), even if the VM uses less than 1GB of memory under normal conditions.
  • Once the VM accesses its 1GB memory the first time, the hypervisor will never reclaim those pages - even if the VM isn't using them. Windows VMs typically touch all their memory pages on boot, so your reservation basically carves out that 1GB and tells ESXi it's never getting it back - even if the VM's completely idle.

If you have a mix of production and non-production VMs, your best bet is to use resource pools with shares defined; this makes sure that in periods of resource contention, your non-prod VMs won't starve your prod VMs, which is probably what your previous virtualization guy was trying to accomplish.

Frozen Peach
Aug 25, 2004

garbage man from a garbage can

Mierdaan posted:

Right, set the reservation to 0 and check the "unlimited" box to clear the memory limit. I'd slowly undo reservations and limits as long as you have a good understanding of your workloads.

If I'm doing my math right, the most we could possibly use, assuming every VM used its maximum guest hardware memory, is 15 GB. Since our active is only 1.7 GB, does that mean with no reservations I should see only that much consumed? All of the VMs on that host are production VMs, and our workloads are pretty consistent.

Can I change reservations and limits without shutting down the VM? Is that safe?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
If you have host swapping you should really, really look into getting more RAM, as RAM is pretty dirt cheap. Host swapping is REALLY REALLY BAD and you don't want it; buying some extra RAM probably wouldn't be a bad idea.

You can change reservations while the VMs are up; however, the reservation will not be released until the VM is rebooted.

What are your VMs utilizing from within the guest OS?

E: Also look up resource pools
http://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.ResourcePool.html

Dilbert As FUCK fucked around with this message at 04:45 on Jan 10, 2013

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
We're definitely looking into getting more memory. I have to crack open that blade this weekend and find out if we have all 8 slots in use with 2GB each, or 4 with 8GB each, before we can order more. I'm hoping it's 4 with 8GB, because then we won't have to order all of it. Still, it's not as cheap as we'd like. Server memory is surprisingly expensive compared to buying poo poo for desktops. We're definitely looking into it though.

Right now I'm just really annoyed because we were talked into Exchange over Google Apps or Office 365 because "we have all the hardware already" and now after we've bought CALs we find out we don't have the hardware and need to order more RAM to comfortably fit it. Plus the group that talked us into it wants us to buy a whole new dedicated blade with 16GB of memory to itself instead of running it on the hardware we already have. So I'm really not sure where "we have all the hardware we need" came from anymore. </rant>

As for actual guest utilization in the OS, I'm not sure. I'll dig into that further tomorrow to find out. Isn't "Active" on the performance graph supposed to show that? Or am I totally misunderstanding how to read those graphs?

thebmw
May 13, 2004
Bing

Frozen-Solid posted:

I have to crack open that blade this weekend and find out if we have all 8 slots in use with 2GB each, or 4 with 8GB each, before we can order more.

In the vSphere client, take a look at the host's Hardware Status tab. Expand Memory and you can see which slots are occupied, and by what.
