|
Misogynist posted:Update: I'm pretty sure I'm receiving prank phone calls from Veeam support even though (because?) I'm no longer a customer. I loved Veeam for the first month we used them. Now I am dealing with their support after we changed a backup repository (it was a VMDK attached to the backup VM; now it's connected via the MS iSCSI initiator), and hot add now fails. They seem to either know very little, or just really not want to talk to me.
|
# ? May 21, 2012 21:34 |
|
I'm running a Laserfiche Avante server (document management) under ESXi 4.1; currently it's the only server on a dual-CPU Dell 610 with 10K SAS drives in RAID 10. Both it and the SQL Server Express instance that runs the LF database sit on the same VM. There are certain areas involving writes to the SQL DB that are absolutely atrocious performance-wise. The LF support team says it's because I'm running on ESXi, but that doesn't make a lot of sense, since the Laserfiche server used to share the host with an SBS server that was using 10x the disk read/write bandwidth, and moving the SBS server to a new host made no performance difference on the Laserfiche server. Personally I think it's the fact that the Laserfiche server is running SQL Express and hitting built-in performance limits, but I don't really know how to prove that. One note for people looking at document image management software: avoid the gently caress out of Laserfiche. Oops, I guess my question is: is it likely that this is an ESXi issue?
|
# ? May 21, 2012 22:45 |
|
bob arctor posted:i'm running a laserfiche avante server (document management) under ESXi 4.1, currently it's the only server on a Dual CPU 10K SAS RAID 10 dell 610. Both it and SQL server express which runs the lf database sit on the same vm. There's certain areas involving writes to the SQL db that are absolutely atrocious performance wise. The LF support team says that it's because I'm running on ESXi, but it doesn't really make a lot of sense since the laser fiche server used to share the host with an sbs server which was using 10x the disk read/write bandwidth, and moving the SBS server to a new host made no performance difference on the laserfiche server. Personally I think it's the fact the laserfiche server is running SQL express and running into built in performance limits, but i don't really know how to prove that. SQL Server Express is limited to 1GB of RAM usage; that's the major remaining performance restriction. If you hit that limit you spill directly to disk and your performance tanks. Can you move the database to a trial/demo of a full SQL instance and see if you experience the same issues? SQL Express limits: 1 CPU (4 cores), 1GB RAM (SQL DB engine), 10GB DB file size. Reference: http://msdn.microsoft.com/en-us/library/cc645993%28v=SQL.110%29.aspx
|
# ? May 22, 2012 00:00 |
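The Express caps quoted above are easy to sanity-check a workload against. A toy sketch (a hypothetical helper, not an official tool; the numbers are the SQL Server 2008 R2 Express figures from the linked MSDN page):

```python
# Hypothetical helper: report which SQL Server Express caps a workload
# would blow through. Caps per the MSDN editions comparison linked above.
EXPRESS_LIMITS = {"cores": 4, "buffer_pool_gb": 1.0, "db_file_gb": 10.0}

def express_bottlenecks(cores, working_set_gb, db_file_gb):
    """Return the list of Express limits this workload would hit."""
    hits = []
    if cores > EXPRESS_LIMITS["cores"]:
        hits.append("cpu")
    if working_set_gb > EXPRESS_LIMITS["buffer_pool_gb"]:
        # over the 1GB buffer pool: pages spill to disk, writes crawl
        hits.append("ram")
    if db_file_gb > EXPRESS_LIMITS["db_file_gb"]:
        hits.append("db_size")
    return hits

print(express_bottlenecks(cores=2, working_set_gb=3.5, db_file_gb=6))  # ['ram']
```

The 1GB buffer pool is the limit most workloads hit first, and it shows up exactly as "certain writes are atrocious" because everything beyond it spills to disk.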
|
Nebulis01 posted:SQL Server Express is limited to 1GB RAM usage, it's the major performance restriction remaining . If you hit that limit you swap directly to disk and your performance tanks. Can you move the database to a trial/demo of a full SQL instance and see if you experience the same issues? Yeah, I'm going to try it, though I have this horrid feeling that our Laserfiche license is limited to SQL Express, so I'd be looking at an insane amount of money.
|
# ? May 22, 2012 03:51 |
|
Is there anything out there about doing GPU offload for VMware View? http://blogs.vmware.com/euc/2012/01/enhancing-graphics-processing-with-teradici-pcoip-host-cards-and-vmware-view.html I've seen this; I'm just wondering if anyone here has experience with it.
|
# ? May 22, 2012 04:08 |
|
Corvettefisher posted:Is there anything about doing GPU offload for vmware view? I did a bunch of reading on it about six months ago; it seems pretty awesome if your users require any kind of multimedia experience on their VMs. The technology is still very new, though, so I wasn't finding tons of material on it at the time outside of the white papers Teradici was putting out.
|
# ? May 22, 2012 04:22 |
|
Moey posted:I did a bunch of reading about it about 6 months ago, seems pretty awesome if your users require any kind of multimedia experience on their VMs. The technology is still very new though, so I wasn't finding tons of stuff on it then, outside of white papers that Teradici was putting out. Lots of people like to watch Flash YouTube videos, and since Flash is GPU-accelerated now, I was wondering if I could offload some of that processing. Since our zero clients are 1920x1200, most people go full 1080p on them, and then I get 5 tickets about "slowness". I am currently trying to fix our storage, fix the fact that networking is completely backwards, update multiple VMs that have out-of-date View clients/vHW/VMware Tools, and juggle the fact that 4 hosts are about to expire in 2 days. Some people also use network-building programs that are GPU-accelerated too. But if I can keep the "PC SLOW" tickets from coming in, it will help Dilbert As FUCK fucked around with this message at 04:41 on May 22, 2012 |
# ? May 22, 2012 04:32 |
|
Guys, I think my home lab is appropriately epic.

domain: ironthrone.pri

VMs:
- bronn (pfsense)
- hodor (DC)
- daenerys (apps/vcenter)
- varys (win7, the 'user' box)

ESXi hosts:
- baratheon: joffrey (DC), renly
- stark: arya (apps), sansa, eddard
- lannister: tywin, cersei, tyrion

The unlabeled ones exist and are on the network, but I'm not sure for what yet. Once I get more comfortable with vSwitches, or even GNS3, I'm going to segregate Lannister into a second site and make Tywin an RODC, etc. Management and production networks are separate, and I had storage with OpenFilers for each ESXi host, but I want to get more realistic, so I deleted them all and will build two tomorrow which service all the VMs. Just thought I'd share this since I just spent 30 minutes slamming my head against a wall when vCenter wouldn't inventory anything by hostname, because I'd forgotten that the domain wasn't westeros.pri but ironthrone.pri.
|
# ? May 22, 2012 06:09 |
|
MC Fruit Stripe posted:- eddard Would this be a headless server?
|
# ? May 22, 2012 06:32 |
|
Something in this lab needs to be named doggiestyle before I get the reference
|
# ? May 22, 2012 06:39 |
|
How is the command history handled in esxi5's shell? There's no history command, but you can up-arrow through the history so it's being stored somewhere...
|
# ? May 22, 2012 13:39 |
|
Mierdaan posted:How is the command history handled in esxi5's shell? There's no history command, but you can up-arrow through the history so it's being stored somewhere... ESXi uses BusyBox, and thus the ash shell. There is a history kept, but only in memory, and you can search it in vi mode. BusyBox decided not to include the history built-in in their version of ash. See http://communities.vmware.com/message/1601787
|
# ? May 22, 2012 14:02 |
|
Yeah, I'd found that Communities thread, but there wasn't anything authoritative in there. If it's only stored in memory, that answers my question - thanks Complex.
|
# ? May 22, 2012 14:07 |
|
Nebulis01 posted:SQL Server Express is limited to 1GB RAM usage, it's the major performance restriction remaining . If you hit that limit you swap directly to disk and your performance tanks. Can you move the database to a trial/demo of a full SQL instance and see if you experience the same issues? Combine that with the Web Edition of Windows Server 2003, which is limited to 2GB of RAM, and you've got a recipe for success.
|
# ? May 22, 2012 14:32 |
|
Corvettefisher posted:Is there anything about doing GPU offload for vmware view? Do you have an e-mail or PMs so I can talk to you about this?
|
# ? May 22, 2012 16:25 |
|
DevNull posted:Do you have an e-mail or PMs so I can talk to you about this? Corvettefish3r@gmail.com Also, cue two all-nighters in a row to fix an environment
|
# ? May 22, 2012 17:09 |
|
Corvettefisher posted:Corvettefish3r@gmail.com I'd rather get it done and know it'll save time and hassle versus not doing it. Also, my company has a time bank, so I'd get the hours back.
|
# ? May 22, 2012 19:19 |
|
EoRaptor posted:I'd rather get it done, and know it'll save time and hassle vs not doing it. I am working non-stop here and will be working at home, FYI. Just got jumbo frames on the network; much snappier now. Dilbert As FUCK fucked around with this message at 20:03 on May 22, 2012 |
# ? May 22, 2012 19:54 |
|
Corvettefisher posted:Just got Jumbo frames on the network, much snappier now Is that really showing a big performance difference? I thought from previous reading it was only like a 10-20% gain in most case. For some reasons my boss keeps claiming it will be a 9 to 1 performance difference.
|
# ? May 22, 2012 20:17 |
|
Moey posted:Is that really showing a big performance difference? I thought from previous reading it was only like a 10-20% gain in most case. For some reasons my boss keeps claiming it will be a 9 to 1 performance difference. For NFS my latency dropped drastically, throughput is up, and vMotions are cut down(I am seeing a tleast 20%), I think PCoIP is performing better too. Frame size from 1500 => 9000
|
# ? May 22, 2012 20:24 |
|
Corvettefisher posted:I think PCoIP is performing better too. What the gently caress
|
# ? May 22, 2012 20:26 |
|
Yeah just put it at 9000 on every hop and profit
|
# ? May 22, 2012 20:26 |
|
Jumbo frames are not always a slam dunk. A lot rides on traffic patterns and the type of hardware you have.
|
# ? May 22, 2012 20:34 |
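A back-of-envelope on why jumbo frames are a 10-20% (and definitely not 9-to-1) improvement at best: raw wire efficiency barely moves, and most real-world gain comes from pushing 6x fewer frames (less per-packet CPU and interrupt work). This sketch assumes TCP/IPv4 with no options and standard Ethernet framing overhead:

```python
# Wire efficiency of TCP payload at MTU 1500 vs 9000.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # L2 header + FCS + preamble/SFD + inter-frame gap
IP_TCP = 20 + 20                 # IPv4 + TCP headers, no options

def efficiency(mtu):
    payload = mtu - IP_TCP       # TCP payload bytes per frame
    return payload / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(mtu, round(efficiency(mtu), 4))  # 1500 -> 0.9493, 9000 -> 0.9914
```

Efficiency goes from roughly 95% to 99%, so the headline throughput ceiling only moves a few percent; the bigger practical wins are the reduced frame rate and lower latency.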
|
evil_bunnY posted:What the gently caress Might just be that I'm happy about jumbo frames, but things do feel much smoother. Seeing as ALL traffic from the hosts is routed through a single vDS (I didn't set that up), they do seem somewhat smoother. Syano posted:Jumbo frames are not always a slam dunk. A lot rides on traffic patterns and the type of hardware you have. Our hardware is pretty high-end Cisco stuff.
|
# ? May 22, 2012 20:45 |
|
Good to know this is still a bug in vCenter 5 + ESXi 5. Yay 4-year-old bugs!
|
# ? May 22, 2012 21:43 |
|
From both our licensing guy and VMware themselves: the only thing my company can do is just go ahead and buy vCenter Server Foundation. They can't swap us to an Essentials plan, and buying that outright would cost more than just buying vCenter. So it sounds like that's what we'll be doing. Which brings me to my next line of questioning. I can install vCenter directly on a VM using some pre-setup appliance thing. Is there any downside to doing that? With HA/backups/vMotion and all of that, if the blade vCenter is on dies, does that mean we lose all those features, leaving us with only HA on one of the blades? Also, with HA/backups/vMotion, if we have two blades with 16 GB of memory each and 10 GB in use, what happens when a blade goes down? Do we have to set it up so only certain VMs stay on, or can we actually go above capacity without worrying about much while we fix our hardware? I have no idea how this stuff works, but I want a better understanding before I get started on another new install. If there are good tutorials for a newbie somewhere, I'd love to read them.
|
# ? May 22, 2012 21:52 |
|
Frozen-Solid posted:From both our licensing guy and VMWare themselves: the only thing my company can do to is just go ahead and buy vCenter Server Foundation. They can't swap us to an essentials plan, and buying that outright would be more than just buying vCenter. Your Standard licensing will give the licensed hosts these features: http://www.vmware.com/products/vsphere/buy/editions_comparison.html You license your hosts, and assuming you have shared storage you will have access to those features. When a blade goes down, HA will try to fail over the VMs it has resources for. For example:

Blade A - 10/16GB in use (VM-A 5GB RAM, VM-B 2GB RAM, VM-C 2GB RAM)
Blade B - 10/16GB in use

If A goes down, B will only have ~6GB free to restart VMs with (less with VM overhead and whatnot), so depending on how you configure HA it may start up VM-B/VM-C or just VM-A. You can adjust which VMs have priority to start up in HA Admission Control. HA will only restart what it can find resources for; if you have 2 hosts, running them both above 50% will not result in a 100% successful restart of all VMs, so you might want to look into adding RAM. Please also note that HA relies on shared storage - without it VMs won't reboot - and don't get talked into VSA. If you know what you need to do but aren't sure how to do it, GET THIS and GET THIS. Scott Lowe's book is also really great, but I actually think the vSphere planning book is better if you are new to VMware concepts; Scott is a great resource and really explains it well, but Planning and Implementing has a better approach for newcomers. It also has a CD with some training videos and some nice labs Dilbert As FUCK fucked around with this message at 23:07 on May 22, 2012 |
# ? May 22, 2012 22:56 |
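The HA restart behavior described above is essentially a greedy pass over the failed host's VMs in restart-priority order, starting whatever still fits. A toy model using the hypothetical Blade A/B numbers from the post (real HA also weighs CPU reservations and per-VM memory overhead):

```python
# Toy HA restart pass: start VMs in priority order until the surviving
# host's spare RAM runs out. Numbers mirror the Blade A/B example above.
def ha_restart(spare_gb, vms):
    """vms: list of (name, ram_gb), highest restart priority first."""
    started = []
    for name, ram in vms:
        if ram <= spare_gb:
            spare_gb -= ram
            started.append(name)
    return started

# Blade B has ~6GB spare; Blade A was running 5+2+2 GB of VMs.
print(ha_restart(6, [("VM-A", 5), ("VM-B", 2), ("VM-C", 2)]))  # ['VM-A']
```

With VM-A prioritized, only it fits; demote it below the others and VM-B/VM-C start instead, which is the "VM-B/VM-C or just VM-A" outcome described in the post.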
|
CF answered most everything above, but unless I missed it he didn't mention the Linux-based vCenter Server Appliance. There are a few limitations to the appliance version vs. the installable Windows application, such as no linked-mode vCenter and no IPv6, but you probably don't care about those. Really it's a question of whether you want another Windows machine to admin, or a kinda black-box-like Linux appliance. Using vCenter on a day-to-day basis will be mostly the same either way. There are a bunch of YouTube videos showing how to install the virtual appliance if you want to wrap your head around how (not) complicated it is. CF also mentioned VSA offhandedly; there he's referring to VMware's Virtual Storage Appliance, i.e. a software solution for taking local disks in your ESXi servers and presenting them as shared NFS storage. It works, but it's expensive and I don't really know what the use case is for it right now.
|
# ? May 22, 2012 23:38 |
|
Mierdaan posted:Good to know this is still a bug in vCenter 5 + ESXi 5. Yay 4-year-old bugs! Also, were you using VMFS3 or VMFS5? And with VAAI atomic locking, or SCSI reservations? Or was it NFS? The underlying causes for "device or resource busy" actually can vary. Sometimes it's a COS process (for ESX, not ESXi), found via `lsof | grep <filename>`. Sometimes it's a corrupt VMFS heartbeat lock record. Sometimes it's a stale/runaway cartel/child world/process (found in VMkernel land, which can be tricky for ESX users; [well-versed] ESXi users can use the vsi shell). And finally, sometimes it comes from the host agents or some third-party agents - that happens, but the causes vary even more. It all comes down to any of the following for block/VMFS storage:

1) A legitimate process opened the file with read or read-write locking.
2) A legitimate process (or cartel/child) had the file open with read or read-write locking, but its parent is gone (VM outage, something else weird).
3) There is corruption of the heartbeat region on VMFS. That happens from storage issues and storage array bugs, but there have been a handful of rare ESX/ESXi bugs too.

For NFS, there are NFS-side filesystem issues, permissions, and lock files involved. A bit simpler; ESX doesn't control locking on NFS. Kachunkachunk fucked around with this message at 01:24 on May 23, 2012 |
# ? May 23, 2012 01:21 |
|
This is definitely a flat-out bug with expanding the virtual disk of a machine while deploying it from template. I replicated it in a vCenter/ESXi/VMFS 5.0 test lab during my Fast Track course with a VCAP watching (I can get you access to my lab system tomorrow if you want). Simple steps to reproduce:

1) Create a VM, sourceVM1, with a (say) 20GB disk
2) Convert sourceVM1 to a template
3) Deploy a VM, newVM1, from the sourceVM1 template, checking the box for Edit Virtual Hardware when deploying from template
4) Up the hard disk size for newVM1 from 20GB to 25GB
5) Examine the /vmfs/volumes/{datastore}/newVM1/newVM1.vmx file

A machine deployed from template without modifying the hard drive size will have a relative path name in the vmx file, pointing to the vmdk descriptor file for the disk. Something similar to (forgive me, not in front of my systems right now):
code:
scsi0:0.fileName = "newVM1.vmdk"
Deploy with the disk size bumped, though, and the vmx instead ends up with an absolute path pointing back into the template's directory, something like:
code:
scsi0:0.fileName = "/vmfs/volumes/{datastore}/sourceVM1/sourceVM1.vmdk"
(Both lines are from memory and illustrative.) The way around this is to swap the newVM1.vmdk and newVM1-flat.vmdk in newVM1's directory for the ones in the template's directory, hack the vmx to point at the vmdk residing in the same directory as itself, and then power on the VM. Mierdaan fucked around with this message at 02:11 on May 23, 2012 |
# ? May 23, 2012 02:08 |
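The vmx half of that workaround - pointing the disk back at a vmdk in the VM's own directory - amounts to a one-line rewrite. A hypothetical sketch (names are made up, and the actual swap of the .vmdk/-flat.vmdk files between directories still has to happen separately):

```python
import re

# Rewrite absolute scsiX:Y.fileName paths in .vmx text to bare file
# names, so each disk resolves relative to the VM's own directory.
def relativize_disk_paths(vmx_text):
    def repl(match):
        name = match.group(2).rsplit("/", 1)[-1]  # keep only the file name
        return '%s = "%s"' % (match.group(1), name)
    return re.sub(r'(scsi\d+:\d+\.fileName) = "([^"]+)"', repl, vmx_text)

broken = 'scsi0:0.fileName = "/vmfs/volumes/ds1/sourceVM1/sourceVM1.vmdk"'
print(relativize_disk_paths(broken))  # scsi0:0.fileName = "sourceVM1.vmdk"
```

Lines that are already relative pass through unchanged, so it is safe to run over the whole file.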
|
Ah okay! Thanks for explaining - pretty clear to me. So it's not quite a locking issue in itself, but template customization looks completely broken there. Well, that's just disappointing, especially if it has been around since as early as June 2008. Do you know how many releases of VirtualCenter and vCenter Server have come out since then? PM me if you have a case number, please. That poo poo is bananas.
|
# ? May 23, 2012 02:31 |
|
The Communities thread I linked says he experienced it on 3.5, and I can reproduce it on 5.0, so I'm guessing this is the 4th release (3.5, 4.0, 4.1, 5.0?) with this particular bug. I don't have a case number because I assumed a 4-year-old bug would already have a million cases open for it - I mean, I actually hit this bug in production on 4.1 and then tried replicating it for fun on vmeduc's 5.0 since my systems aren't there yet; someone else must have filed a bug, right!? Shame on me, I am part of the problem. I will be a good user, open a well-explained bug report tomorrow, and PM you the case number.
|
# ? May 23, 2012 02:42 |
|
Corvettefisher posted:Your standard licensing will give the hosts that are licensed these features HA, at least, is configured in vCenter but actually runs directly on the hosts, so if the host your vCenter VM is running on dies, HA will still try to bring it up on another host. If you disable Admission Control, then all the VMs will power up on B regardless (excluding the case where you've set up sufficiently many reservations that it would refuse to power them all up on a single host anyway). It will then use transparent page sharing, memory ballooning and (when necessary) swapping to disk in order to fit everything in. With Admission Control enabled, things get a bit more fun. A 'slot size' is calculated for the cluster (for both CPU and RAM), based off the largest reservation of any powered-on VM (or, if there are no reservations, a default of 32MHz [was 256 prior to vSphere 5] for CPU and the memory overhead for RAM). This is then used to calculate how many slots each host in the cluster has, and thus how many VMs are allowed to run, bearing in mind the configured failover capacity for the cluster. If Admission Control is properly configured (though with only two hosts, you're severely limiting yourself by doing so), every running VM will be powered up on another host in the event of a failure; that's kind of the whole point of it.
|
# ? May 23, 2012 03:06 |
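The slot arithmetic above can be sketched in a few lines (a simplification with hypothetical numbers; real Admission Control also offers percentage-based and dedicated-failover-host policies):

```python
# Toy slot-size calculation per the description above (vSphere 5 defaults).
def slot_counts(hosts, vms, cpu_default_mhz=32):
    """hosts: [(cpu_mhz, mem_mb)]; vms: [(cpu_res_mhz, mem_res_mb, overhead_mb)]."""
    # CPU slot: largest CPU reservation, or the 32MHz default
    slot_cpu = max([v[0] for v in vms] + [cpu_default_mhz])
    # Memory slot: largest memory reservation; a VM with no reservation
    # contributes just its memory overhead
    slot_mem = max((v[1] if v[1] else v[2]) for v in vms)
    return [min(cpu // slot_cpu, mem // slot_mem) for cpu, mem in hosts]

# Two 16GB hosts; one VM's 1GB reservation dominates the memory slot size.
print(slot_counts(hosts=[(9600, 16384), (9600, 16384)],
                  vms=[(0, 1024, 90), (0, 0, 70)]))  # [16, 16]
```

Note how a single large reservation shrinks every host's slot count, which is why oversized reservations make Admission Control so conservative.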
|
Mierdaan posted:The Communities thread I linked says he experienced it on 3.5, and I can reproduce on 5.0 so I'm guessing this is the 4th revision (3.5, 4.0, 4.1, 5.0?) with this particular bug. I don't have a case number because I assumed a 4-year-old bug would already have a million cases open for it - I mean, I actually hit this bug in production on 4.1 and then tried replicating for fun on vmeduc's 5.0 since my systems aren't there yet, someone else must have filed a bug, right!? Chances are pretty good that a bug was filed quite some time ago, but if the KBs are lacking in acknowledging it publicly, I have a feeling it could still be slipping by each day. Thanks in advance.
|
# ? May 23, 2012 03:22 |
|
Apparently I can't detect rhetorical questions on the internet.
|
# ? May 23, 2012 03:27 |
|
I can say I've hit that bug myself. Again, the steps to reproduce are so simple, so common, I just assumed that it already had been reported. I was using ESX 4 and 4.1 with both vCenter 4 and 5.
|
# ? May 23, 2012 03:30 |
|
Ah okay, I think I was getting the best practices of how you should do it mixed up with what you can do with it. Thanks! I need to touch up on my HA again anyway.
|
# ? May 23, 2012 03:30 |
|
complex posted:I can say I've hit that bug myself. Again, the steps to reproduce are so simple, so common, I just assumed that it already had been reported. I was using ESX 4 and 4.1 with both vCenter 4 and 5. I think the checkbox for Edit Virtual Hardware still says "(experimental)" next to it, so that probably contributes to the apathy towards filing bug reports.
|
# ? May 23, 2012 03:32 |
|
Mierdaan posted:I think the checkbox for Edit Virtual Hardware still says "(experimental)" next to it, so that probably contributes to the apathy towards filing bug reports. Welp, they closed my ticket already, saying it was an experimental feature anyway.
|
# ? May 23, 2012 19:58 |
|
Apathy in regular Support as well, it seems. Like I mentioned in the PM to you, I will look into it tomorrow. I'm more interested in seeing whether there is actually a bug filed, because I sure as poo poo know that in your case the TSE didn't bother sourcing or filing one. Most of them really are too busy and have to prioritize other issues, but I don't really agree with the way it was handled. Then again, I also work on a team that's a lot less break-fixy and a bit more solutions-oriented. If it means anything, VMware's feature request folks are quite receptive, and I have seen first-hand a product manager working directly with customers and inviting them to beta products, etc. Otherwise, as with any vendor, it depends on how loud you want to scream, really.
|
# ? May 24, 2012 05:42 |