fatjoint
Sep 28, 2005
Fatjoint

Erwin posted:

Question about VMware View pricing. Is it literally only $190/desktop for existing vSphere customers as claimed on this page?: http://www.vmware.com/products/view/howtobuy.html

I figured I'd start looking at View towards the end of the year when I have time because I figured it had an initial buy-in of several thousand dollars at least. Is it really just linearly priced on a per-desktop basis?

That's the licensing cost, but it doesn't cover the backend storage or the servers to host the instances.

You'll want a certain number of IOPS per instance in order to have an environment that truly replaces a physical desktop, and this is where things get costly. For our environment, we were looking at around $150-180k just for enough backend spindles to support roughly 20-30 IOPS per instance.
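
As a rough back-of-the-envelope sketch of how those spindle counts add up; the desktop count, workload mix, RAID penalty, and per-spindle figures below are illustrative assumptions, not our actual config:

```python
import math

# Rough VDI storage sizing sketch: how many spindles are needed just to
# satisfy steady-state IOPS, ignoring cache and anything fancier than a
# simple RAID write penalty. All numbers below are illustrative.
desktops = 500                 # hypothetical desktop count
iops_per_desktop = 25          # the 20-30 IOPS per instance figure above
read_ratio = 0.4               # assume a write-heavy VDI workload
raid_write_penalty = 2         # RAID 10 (RAID 5 would be 4)
iops_per_spindle = 175         # roughly a 15k RPM SAS/FC disk

frontend_iops = desktops * iops_per_desktop
# Writes hit the backend multiple times depending on RAID level.
backend_iops = (frontend_iops * read_ratio
                + frontend_iops * (1 - read_ratio) * raid_write_penalty)
spindles = math.ceil(backend_iops / iops_per_spindle)

print(f"Frontend IOPS: {frontend_iops}")
print(f"Backend IOPS after RAID penalty: {backend_iops:.0f}")
print(f"Spindles required: {spindles}")
```

Run it with numbers like those and you land well north of a hundred spindles, which is how the storage bill gets into that range before you've licensed a single desktop.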

fatjoint fucked around with this message at 01:49 on Feb 28, 2012

fatjoint
Sep 28, 2005
Fatjoint

WarauInu posted:

Question on networking and bonds; maybe I have something wrong. I'm setting up my first "production" XenServer 6.0 host, and when it's done it will ideally have six NICs in three bonds of two NICs apiece. My thought was to split each bond across different cards, and to split the pairs across different switches, so that if a switch went down traffic would flow through the other.

I am being told I should have each bond on the same switch or it won't work. Am I wrong, and if so, how would I best get the redundancy I'm looking for?

[edit] I'm just not going to comment on this in depth... The more I write, the more I realize I should leave this to someone who knows what they're talking about. Have a look here instead. I do know that when we attempted link aggregation at my site, we ran into an issue because the aggregate was split across two switches, and it didn't work the way we intended.

http://en.wikipedia.org/wiki/Link_aggregation

The OP should include Duncan Epping's blog. I've purchased every book Duncan Epping has written, and he has a great blog that covers a lot of information for free at http://www.yellow-bricks.com/


fatjoint fucked around with this message at 01:17 on Feb 29, 2012

fatjoint
Sep 28, 2005
Fatjoint

Misogynist posted:

This is one of the issues that we're not having with our eval of PHD Virtual.

We have Backup Exec, and the one thing I'd say about any of these products is that they all have their issues.

We used to have extremely long Exchange backups, until one day I realized that while the Exchange server's system drive lived in a datastore with plenty of free space for snapshots, the VMDKs for all of the other volumes were sized to the full size of their VMFS datastores, leaving no free space.

This causes issues with changed block tracking backups: specifically, it caused the ESX host to issue stuns to the virtual machine to accommodate the writing of snapshot data. I created all new LUNs for the storage groups and made sure, when creating each of the VMDKs, to leave about 15GB of free space within the ESX datastore.

We went from 30+ hours of backup time down to 10.
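
To make the headroom math concrete, here's a minimal sketch of the check that would have caught it; the change rate and backup window are made-up numbers, not ours:

```python
# A backup snapshot grows with the guest's change rate for as long as the
# snapshot stays open, so a VMDK sized to fill its datastore leaves nowhere
# for that growth to go. All figures below are illustrative assumptions.
datastore_capacity_gb = 500        # hypothetical VMFS datastore size
vmdk_size_gb = 500                 # VMDK carved to the full datastore size
change_rate_gb_per_hr = 5          # guest write rate while the backup runs
backup_window_hr = 3

free_gb = datastore_capacity_gb - vmdk_size_gb
snapshot_growth_gb = change_rate_gb_per_hr * backup_window_hr

if free_gb < snapshot_growth_gb:
    print(f"Only {free_gb} GB free, but the snapshot may grow to "
          f"~{snapshot_growth_gb} GB: expect stuns or a failed snapshot. "
          f"Leave headroom (we settled on ~15 GB).")
else:
    print(f"{free_gb} GB free covers ~{snapshot_growth_gb} GB of snapshot growth.")
```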

fatjoint
Sep 28, 2005
Fatjoint

Rhymenoserous posted:

Well I don't like it.

Also I only have it set to make suggestions, not do any active moving.

Click on one of the hosts in the cluster and then on the Virtual Machines tab to see which guests are actually running on that host.
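
If you'd rather script it than click around, something like this pyVmomi sketch lists the powered-on guests per host; the vCenter address and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Sketch: list which guests are currently running on each host.
# The vCenter address and credentials below are placeholders.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        running = [vm.name for vm in host.vm
                   if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
        print(f"{host.name}: {running}")
    view.Destroy()
finally:
    Disconnect(si)
```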

There's very little reason to disable DRS automation; at its lowest automation level, a guest will only be moved off a host if doing so would free up significant resources.

I've only been in one scenario where automation needed to be disabled, and that was in a stretched cluster, i.e. a cluster whose hosts are separated from each other by miles.

fatjoint
Sep 28, 2005
Fatjoint

Beelzebubba9 posted:

So my question is: how would this be affected by vMotion in the case of a host failure? I assume RAM state is persisted in the case of the failover, and the worst case is there might be a brief loss of synchronicity between all of the parts of the database. Is there anything I should be worried about other than that?

If I understand the question correctly, you're misusing terminology here. Failover doesn't use vMotion; failover is a feature of HA (high availability) or Fault Tolerance. vMotion is the technology that lets you move a running virtual machine from one host to another.

In the case of HA, when a host fails, VMware will boot the VM on another host if you've configured it to (by default it does, provided the VM is on shared storage). An HA failover is a cold boot, not a migration of memory contents; it's a crash-consistent restart of a virtual machine whose host failed, and it isn't foolproof, since crash-consistent boots can result in corruption.

With Fault Tolerance, however, if a host fails, the "lock-stepped" virtual machine takes over exactly where the other left off. It's a seamless transition, because the contents of memory and all disk writes are shadowed in the lock-stepped copy.
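
For what it's worth, you can see which of these protections actually applies to a given VM from the API. Here's a rough pyVmomi sketch; it assumes you already have a ServiceInstance connection like the one in the earlier sketch:

```python
from pyVmomi import vim

def report_protection(si):
    """Print HA status per cluster and the Fault Tolerance state of each VM.

    A 'notConfigured' FT state means the VM relies on HA's cold restart only;
    'running' means a lock-stepped FT secondary is active for it.
    """
    content = si.RetrieveContent()
    clusters = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in clusters.view:
        ha_enabled = cluster.configurationEx.dasConfig.enabled
        print(f"Cluster {cluster.name}: HA enabled = {ha_enabled}")
        for host in cluster.host:
            for vm in host.vm:
                print(f"  {vm.name}: FT state = {vm.runtime.faultToleranceState}")
    clusters.Destroy()
```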

fatjoint fucked around with this message at 08:29 on Mar 17, 2012

fatjoint
Sep 28, 2005
Fatjoint

Erwin posted:

So I'm starting to juggle data around so I can upgrade my datastores to VMFS5. Is there any reason I wouldn't want to combine, for instance, two 2TB LUNs/VMFSs/datastores that are on one RAID set into one 4TB LUN/VMFS/datastore?

edit: I'm storage vMotioning VMs off of datastores, deleting the datastores, and creating new ones, not necessarily because it's required, but I have the room so why not.

That really depends on the activity you're asking of it, right? But assuming the virtual machines you're thinking of combining into a single datastore don't generate a whole lot of I/O, VMware will tell you to plan for a limit of around 20-25 virtual machines per LUN.
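
As a trivial planning sketch of that rule of thumb; the VM count is a made-up example:

```python
import math

# Back-of-the-envelope: how many datastores to carve out of the RAID set if
# you stick to the conservative 20-25 VMs-per-LUN guideline discussed above.
vm_count = 90            # hypothetical number of VMs to place
vms_per_datastore = 25   # conservative guideline without VAAI

datastores_needed = math.ceil(vm_count / vms_per_datastore)
print(f"{vm_count} VMs at {vms_per_datastore} per datastore -> "
      f"{datastores_needed} datastores")
```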


Just on a side note, because I'm so frickin happy with our new purchase, let me gush about it for a moment.

I can say that Hitachi's SAN kicks rear end. We just purchased an AMS2100 with six 8Gb FC ports, and unlike what I've had to do with our NetApp filer, you do not have to assign pathing! With Hitachi, you create your host group and associate the FC ports with the WWNs of the servers, and you're done; nothing more to it. Internally, the SAN auto-balances paths across the hosts, so finally there's a GOOD reason to use Round Robin pathing.

fatjoint fucked around with this message at 22:09 on Apr 6, 2012

fatjoint
Sep 28, 2005
Fatjoint

KS posted:

Never really expected to see anyone praise an AMS. We ditched an AMS2300 for Compellent; the UI was unresponsive and horrible, and IO performance was really terrible. Have they improved SMN2? It was taking me ~60 seconds from clicking each link to page load.


If your storage does not have VAAI hardware-assisted locking, you want to be careful about the number of VMs on each datastore. If you do have VAAI, it's not a big deal.

I don't know what SMN1 was like, but SMN2 is essentially just a web server you install on a guest VM, and it gathers its info from the array. A full refresh of all data can take ~10 seconds, but normal clicking and working is very snappy.

Even with VAAI "locking" (I don't remember hearing that term before), all the storage vendors and VMware have always said 20-25 VMs per LUN, and I want to say Scott Lowe states the same in Mastering vSphere 5.

Hehe, whenever I say "Scott Lowe" I imagine those kids in the desert in Beyond Thunderdome talking about the Captain.



Pantology posted:

All storage vendors and VMware have always said "it depends," and when pressed for a number have given something low and safe. With VAAI and some help from array-side caching technologies, you can get absurdly high densities under the right conditions. EMC has claimed up to 512 VMs per LUN, and NetApp was claiming up to 128 on vSphere 4.1; I'm sure that's higher by now.

You're right... In terms of what a typical buildout would be, it's safe to say it lands around the 20-25 per LUN mark, but with higher-end builds you can do a lot more.

fatjoint fucked around with this message at 05:16 on Apr 8, 2012

fatjoint
Sep 28, 2005
Fatjoint

adorai posted:

I just went through this process for my company.

I started with a number in mind that I figured was palatable to spend, and framed the discussion around our DR site: the hardware there was lacking if we were to have a real disaster. What I did was price out a full refresh of our production site, and we will move our current production hardware to the DR site. We got two years out of it in production, and we'll now get another three years out of it at DR.

Realistically, I know my business environment, and it's very easy to justify purchases when the FDIC suggests that you do so.*

Knowing nothing about your company, I would probably just try to make one of the sites the red-headed stepchild and get on a two-year refresh cycle: move the gear from site A to site C now, then in two years refresh site B and move its gear to site C, repeating the process every four years. So site C gets four-year-old gear for two years at a time.

*On this note, does anyone have suggestions on how to explain VMware HA and clustering to an IT examiner who clearly doesn't understand it? Last year one of the examiners told us that when we did DR testing, we had to bring each VM up on each cluster member to prove that the member could run the VM. We decided we were going to ignore his request, but if he comes back I need a reasonable response for him, and I really don't even know where to begin with this guy.

It's not completely unreasonable. Just put the members of your HA cluster into maintenance mode in a rolling fashion to show the VMs migrating to, and running on, different nodes.
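
Something like this pyVmomi sketch would let you walk him through it host by host; the vCenter address, credentials, and cluster name are placeholders, and it assumes DRS is set to fully automated so that maintenance mode evacuates each host's guests on its own:

```python
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Rolling maintenance-mode demo: put each host into maintenance mode (DRS
# fully automated evacuates its VMs), show where the guests landed, then
# exit maintenance mode and move on to the next host.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "ProductionCluster")
    for host in cluster.host:
        print(f"Entering maintenance mode on {host.name} ...")
        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
        for other in cluster.host:
            if other is not host:
                print(f"  {other.name}: {[vm.name for vm in other.vm]}")
        WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
        time.sleep(30)  # give DRS a moment to rebalance before the next host
finally:
    Disconnect(si)
```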
