|
Viktor posted:I'm researching dumping XenServer, as we're up for renewal, and moving over to a VMware vSphere Essentials kit. I wanted to handle the whole data recovery side in one shot. How is VDP in the Essentials Plus kit? Can I get away with using it with a backup NAS instead of Veeam?
|
# ? Feb 4, 2013 19:46 |
|
|
Corvettefisher posted:I've had success with VDP in my labs, it incorporates some of the Avamar features. I've heard many people complain about Veeam in the thread and would look into PHDvirtual or vRanger. I believe a few people here are using VDP in production, keep in mind it only supports up to 100 VMs. It supports up to 1000 VMs, but only 100 VMs per appliance.
|
# ? Feb 4, 2013 20:29 |
|
I've got ESX 4.1 clients, and the machines only report CPU Ready as a time in milliseconds. According to this vmfaq page, I should just divide the milliseconds by 200, since the realtime chart's sample period is 20 seconds (20,000 ms) and dividing by 200 yields a percentage. Is that still the correct procedure?
|
# ? Feb 5, 2013 00:59 |
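For reference, the conversion described above is simple arithmetic; a minimal sketch, assuming the realtime chart's 20-second sample interval (the function name is made up for illustration):

```python
def cpu_ready_percent(ready_ms: float, interval_ms: float = 20_000) -> float:
    """Convert a CPU Ready summation (milliseconds) to a percentage.

    The realtime chart samples every 20 seconds (20,000 ms), so dividing
    the ready time by interval_ms / 100 = 200 yields a percentage.
    """
    return ready_ms / (interval_ms / 100)

# 1,000 ms of ready time in a 20 s sample works out to 5% CPU Ready
```

The same formula applies to longer rollup intervals if you swap in the appropriate `interval_ms`.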
|
Mierdaan posted:I have iSCSI datastores with DAVG/cmd regularly spiking to >100ms for periods of a few seconds (with some rarer spikes up to 200-400ms), but because there's no errors in vmkernel.log VMware support doesn't view this as a problem. Latency on the array is pretty sane, 5-15ms all the time. Quoting myself from long ago, because this problem finally got resolved when a SAS card in our Compellent array finally poo poo the bed and caused a failover event. Copilot support said it'd been spamming the internal logs with errors constantly for at least the last three days, but my guess is that it was the problem all along. Why they'd never noticed it, in our many support sessions, is beyond me.
|
# ? Feb 5, 2013 17:31 |
|
I'm looking at setting up a small personal VDI network that can support maybe 5-6 VMs running Windows 7/8. I'd like each machine to have around 4-8 GB of RAM and 2-4 CPU cores. From a hardware standpoint, what's a ballpark figure for how much it will cost me to build a server capable of running this VDI?
|
# ? Feb 5, 2013 19:59 |
|
Above Our Own posted:I'm looking at setting up a small personal VDI network that can support maybe 5-6 VMs running Windows 7/8. I'd like each machine to have around 4-8 GB of RAM and 2-4 CPU cores. From a hardware standpoint, what's a ballpark figure for how much it will cost me to build a server capable of running this VDI? That's a pretty open-ended question right there. Do your users actually saturate 2+ cores and 4+ GB of RAM? Overprovisioning can hamper performance more than help it. How much storage do they use, and how fast does that storage need to be? What do you plan to expand on? For what you are trying to do you might want to look into VDI-in-a-Box.
|
# ? Feb 5, 2013 20:12 |
|
Moey posted:This is the case. Everything (SAN and hosts) is in the same 192.168.x.x range. So in the picture that I drew, the host has 4 paths to the SAN. If I pull the switch interconnect, that host will only have 2 paths. The point I am trying to get at is that with a /24 mask, for example, an IP of 192.168.1.3 will not be able to talk to 192.168.2.5 because the mask stops it. So do you really mean 192.168.x.x or 192.168.1.x? Because if it is the first one then you really should be using a /16 mask.
|
# ? Feb 5, 2013 20:24 |
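The mask behavior described above is easy to check with Python's standard `ipaddress` module; a small sketch (nothing here is VMware-specific, and the helper name is made up):

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int) -> bool:
    """True if both addresses fall in the same network under the given mask."""
    # strict=False lets us derive the network from a host address
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

# With a /24 mask, 192.168.1.3 and 192.168.2.5 land in different networks;
# widen the mask to /16 and they can talk directly.
```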
|
BangersInMyKnickers posted:The point I am trying to get at is that with a /24 mask, for example, an IP of 192.168.1.3 will not be able to talk to 192.168.2.5 because the mask stops it. So do you really mean 192.168.x.x or 192.168.1.x? Because if it is the first one then you really should be using a /16 mask. Sorry. Everything on the storage network is within the same 192.168.1.x range.
|
# ? Feb 5, 2013 20:28 |
|
Corvettefisher posted:That's a pretty open-ended question right there. Do your users actually saturate 2+ cores and 4+ GB of RAM? Overprovisioning can hamper performance more than help it. How much storage do they use, and how fast does that storage need to be? What do you plan to expand on? I was hoping to run my VDI on vSphere and no, the majority will not saturate that much (although a few definitely will). I'm mainly interested in what kind of hardware people are running small personal/enthusiast VDIs on, since I have extremely limited experience with building servers.
|
# ? Feb 5, 2013 20:44 |
|
Above Our Own posted:I was hoping to run my VDI on vSphere and no, the majority will not saturate that much (although a few definitely will). I'm mainly interested in what kind of hardware people are running small personal/enthusiast VDIs on, since I have extremely limited experience with building servers. You might want to take a look at some of the Dell T420s; they will probably hit the price point you are looking for. You can use vSphere for your deployment, however View does have some additional costs, where Citrix, for this scenario, may be more cost effective. Unless, that is, you are just pooling the desktops on the server and giving RDP access to them, which would probably be the cheapest way. For a ballpark you would probably be fine with: a T420 with an E5-2420, 24-32 GB of RAM, an H310, RAID 10 15k 146 GB (Windows 7 boot disks), and RAID 5 7.2k NL 1-2 TB (or 500 GB) for data. However, it would strongly depend on your workloads and expected growth. E: http://myvirtualcloud.net/?page_id=1076 That is a ballpark calculator Dilbert As FUCK fucked around with this message at 21:28 on Feb 5, 2013 |
# ? Feb 5, 2013 21:00 |
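As a sanity check on sizing a small pool like the one above, the raw resource math can be sketched as follows (all names, overcommit ratios, and overhead figures here are illustrative assumptions, not a View/Citrix sizing rule):

```python
import math

def vdi_host_requirements(num_vms: int, ram_gb_per_vm: int, vcpus_per_vm: int,
                          cpu_overcommit: float = 4.0,
                          hypervisor_ram_gb: int = 4) -> dict:
    """Rough host sizing for a small VDI pool.

    RAM is not overcommitted here (conservative); vCPUs routinely are,
    so physical cores needed = total vCPUs / overcommit ratio, rounded up.
    """
    total_ram = num_vms * ram_gb_per_vm + hypervisor_ram_gb
    total_vcpus = num_vms * vcpus_per_vm
    cores = math.ceil(total_vcpus / cpu_overcommit)
    return {"ram_gb": total_ram, "physical_cores": cores}

# Six VMs at 4 GB / 2 vCPUs each: roughly 28 GB RAM and 3 cores at 4:1,
# which is comfortably inside a single-socket T420-class box.
```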
|
You know what's dumb? When a production VM has an unimportant thin-provisioned disk attached to it on a datastore that fills up, VMware suspends the VM. Yes, I get it, the OS won't be happy when it can't write to a disk that it thinks has space, but I'd rather have that disk time out or just get disconnected than have it lock up the whole VM. Is there a way to avoid this besides thick provisioning the disk?
|
# ? Feb 5, 2013 21:25 |
|
Erwin posted:You know what's dumb? When a production VM has an unimportant thin-provisioned disk attached to it on a datastore that fills up, VMware suspends the VM. Yes, I get it, the OS won't be happy when it can't write to a disk that it thinks has space, but I'd rather have that disk time out or just get disconnected than have it lock up the whole VM. Does your license support Datastore Clusters/Storage DRS? Datastore alarms should warn you of datastores getting nearly full, and Storage DRS can be configured to help avoid such problems. However, this is really the drawback to thin provisioning: thin disks can quietly fill up your datastore.
|
# ? Feb 5, 2013 21:31 |
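The risk being described, thin disks collectively promising more space than the datastore actually has, boils down to an overcommit ratio; a minimal sketch with made-up numbers and a made-up function name:

```python
def overcommit_ratio(provisioned_gb: list, capacity_gb: float) -> float:
    """Ratio of space promised to thin-provisioned disks vs. real capacity.

    Anything over 1.0 means the guests collectively believe they have more
    space than the datastore can deliver, which is exactly the scenario
    that suspends a VM once the datastore fills.
    """
    return sum(provisioned_gb) / capacity_gb

# Three 500 GB thin disks on a 1 TB datastore: 1.5x overcommitted,
# so an alarm threshold well below 100% full is the only real safety net.
```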
|
Corvettefisher posted:Does your license support Datastore Clustering/DRS? Data Store alarms should warn you of disks being nearly full, Storage DRS can be configured to help avoid such problems. However, this is really the draw back to thin provisions is that they can fill up your datastore unknowingly. No, we're on enterprise. And yes, I'm aware that alarms are a thing. Ultimately I should have been paying attention, but I was just ranting about behavior that I don't think is optimal.
|
# ? Feb 5, 2013 21:36 |
|
Erwin posted:No, we're on enterprise. And yes, I'm aware that alarms are a thing. Ultimately I should have been paying attention, but I was just ranting about behavior that I don't think is optimal. To make it more noticeable you could set up alarms to email you when alerts and warnings are detected, which would cut down the overhead of watching for alarms that might otherwise go unnoticed. Just a suggestion, however.
|
# ? Feb 5, 2013 21:40 |
|
Corvettefisher posted:To make it more noticeable you could set up alarms to email you when alerts and warnings detected, which would reduce the amount of overhead required to manage alarms that may not appear so quickly. Just a suggestion however. I've already configured emails, that's not the point. You're not helping my impotent ranting.
|
# ? Feb 5, 2013 21:57 |
|
Erwin posted:I've already configured emails, that's not the point. You're not helping my impotent ranting.
|
# ? Feb 6, 2013 00:10 |
|
Since it accidentally got posted here before, Dell is now officially going private: http://content.dell.com/us/en/corp/d/secure/2013-02-04-michael-dell-silverlake-acquisition.aspx
|
# ? Feb 6, 2013 00:47 |
|
adorai posted:I recommend a decommissioned server and a hammer. Sledge or ball peen, either will be fulfilling.
|
# ? Feb 6, 2013 03:11 |
|
adorai posted:I recommend a decommissioned server and a hammer. Sledge or ball peen, either will be fulfilling. Counting down the days till we move our file storage onto VMs backed by the SAN so we as a staff can take our StorageTek NAS to a shooting range.
|
# ? Feb 6, 2013 03:19 |
|
FISHMANPET posted:Counting down the days till we move our file storage onto VMs backed by the SAN so we as a staff can take our StorageTek NAS to a shooting range. Did I just google you?
|
# ? Feb 6, 2013 08:15 |
|
Is there an easy way to check the block deltas for individual VMDKs or VMs on vSphere 4.1? Our storage guy is getting mad at me because our OS partition volume is generating a change delta (not growth) of 20% out of 850 GB of deduped data every week or so, and it's annoying to replicate. He thinks it is being caused by the OS page files, but I am extremely suspicious of this because, unless there is a huge amount of memory pressure causing hard faults, the OS page file shouldn't be getting used much. Admittedly, our Exchange 2007 server isn't configured well and hard faults a lot because too much is being allocated to the store.exe process, but that is getting fixed. And even in a worst case it could only account for maybe 1/10 of the delta we are seeing. If I can't narrow it down in the console, then I think the next best thing I can do is link up a performance counter to all my VMs to track write activity to the C: drives, get an idea of which systems are causing it, and pursue it from there.
|
# ? Feb 6, 2013 16:20 |
|
If you don't use something like Veeam that already takes snapshots, you could take a snapshot of each VM and see how big the delta file gets by the end of the week. Probably the better way would be to turn on CBT and use something like this: http://www.vmguru.com/articles/powershell/23-cbt-tracker-powershell-script-now-with-more-zombie. Again, this assumes CBT isn't on for other reasons.
|
# ? Feb 6, 2013 17:32 |
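A quick-and-dirty way to rank the results of that snapshot experiment, once you've collected the -delta.vmdk sizes at the end of the week, might look like the following sketch (the VM names and the idea of feeding it a plain dict are illustrative; nothing here talks to vSphere):

```python
def rank_by_change(delta_sizes: dict) -> list:
    """Sort VMs by total snapshot delta size, biggest writer first.

    delta_sizes maps a VM name to the byte size of its -delta.vmdk file(s)
    after the snapshot has aged; the top entries are the change-rate culprits.
    """
    return sorted(delta_sizes.items(), key=lambda kv: kv[1], reverse=True)

# Whichever VM tops the list is where to point the per-C:-drive
# write counters the poster mentions.
```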
|
BangersInMyKnickers posted:Is there an easy way to check the block deltas for individual VMDKs or VMs on vSphere 4.1? Our storage guy is getting mad at me because our OS partition volume is generating a change delta (not growth) of 20% out of 850 GB of deduped data every week or so, and it's annoying to replicate. He thinks it is being caused by the OS page files, but I am extremely suspicious of this because, unless there is a huge amount of memory pressure causing hard faults, the OS page file shouldn't be getting used much. Admittedly, our Exchange 2007 server isn't configured well and hard faults a lot because too much is being allocated to the store.exe process, but that is getting fixed. And even in a worst case it could only account for maybe 1/10 of the delta we are seeing. Someone gave me this same advice: turn on CBT (changed block tracking) for the suspect VMs, wait 24 hours, and then look at the size of the CBT files to see which VMs changed the most.
|
# ? Feb 6, 2013 17:34 |
|
madsushi posted:Someone gave me this same advice: turn on CBT (changed block tracking) for the suspect VMs, wait 24 hours, and then look at the size of the CBT files to see which VMs changed the most. The size of the CBT file is proportional to the size of the .vmdk and doesn't change.
|
# ? Feb 6, 2013 17:37 |
|
How big are these CBT files going to be, anyway? I might just do an overnight snapshot Friday as a quick-and-dirty way and merge them back in Saturday or Sunday morning.
|
# ? Feb 6, 2013 17:43 |
|
Anyone using SCVMM 2012 run into this issue? I have one Hyper-V host that doesn't show its VMs in VMM, but they show fine in Hyper-V on the host itself. The host is 2008 R2 SP1 and it is configured the same as other hosts that are showing just fine in VMM. If I create a new VM on the host in VMM, it shows just fine. But VMM just cannot see the VMs already there. Any ideas? I've reinstalled the agent a few times, rebooted the host, the works. At this point I'm about to do a manual export of the VMs and re-import them, but I'd rather avoid the downtime if I can just force the VMs to re-register or something in VMM.
|
# ? Feb 6, 2013 17:45 |
|
BangersInMyKnickers posted:How big are these CBT files going to be, anyway? I might just do an overnight snapshot friday as a quick and dirty way and merge them back in saturday or sunday morning. Looks like around 8MB for a 1TB vmdk. You actually have to analyze the -ctk file to figure out what's going on. The size means nothing (and they're tiny).
|
# ? Feb 6, 2013 17:48 |
|
Erwin posted:Looks like around 8MB for a 1TB vmdk. You actually have to analyze the -ctk file to figure out what's going on. The size means nothing (and they're tiny). That's fine, I just wanted to make sure I wasn't going to be creating a massive amount of storage growth. 8 MB per VM is nothing.
|
# ? Feb 6, 2013 17:54 |
|
Oh cool, it's Backup Exec's endless creation and deletion of shadow files and snapshot vmdks causing it. Exchange is our only offender for excessive hard page faults, and even that is a couple gigs of the delta out of 250.
|
# ? Feb 6, 2013 19:46 |
|
BangersInMyKnickers posted:Oh cool, it's Backup Exec's endless creation and deletion of shadow files and snapshot vmdks causing it. Exchange is our only offender for excessive hard page faults, and even that is a couple gigs of the delta out of 250. On the one hand I feel sorry for you, on the other hand it's 2013 and you're still using BE.
|
# ? Feb 6, 2013 22:59 |
|
evil_bunnY posted:On the one hand I feel sorry for you, on the other hand it's 2013 and you're still using BE. He's in educationland, I'm sure he has no choice.
|
# ? Feb 6, 2013 23:28 |
|
Mierdaan posted:He's in educationland, I'm sure he has no choice. We're in educationland and we're still using TSM
|
# ? Feb 6, 2013 23:30 |
|
evil_bunnY posted:On the one hand I feel sorry for you, on the other hand it's 2013 and you're still using BE. I'm not the backup guy; I'm washing my hands of this poo poo. The storage guy can scream at someone else about the dedupe and replication problems. I've told the storage guy and my boss a dozen times over the last five years that they should stop backing up directly to tape and instead use the secondary NAS that we replicate to as a repository, then dump from that to tape for disaster archival, but heads are up asses still.
|
# ? Feb 6, 2013 23:53 |
|
Backup Exec
Pros: Cheap as poo poo, already installed
Cons: Everything else
|
# ? Feb 6, 2013 23:54 |
|
Misogynist posted:We're in educationland and we're still using TSM Misogynist posted:$250,000 later TSM works at least. evil_bunnY fucked around with this message at 02:04 on Feb 7, 2013 |
# ? Feb 7, 2013 01:55 |
|
evil_bunnY posted:TSM works at least.
|
# ? Feb 7, 2013 02:00 |
|
I got into a bit of an, uh, tussle with the other guys working on my production cluster and I was curious what the prevailing opinion is here: Virtual disks...thin or thick provision?
|
# ? Feb 7, 2013 02:24 |
|
Powdered Toast Man posted:I got into a bit of an, uh, tussle with the other guys working on my production cluster and I was curious what the prevailing opinion is here: The opposite setting of whatever you're doing on the array side.
|
# ? Feb 7, 2013 02:27 |
|
Powdered Toast Man posted:I got into a bit of an, uh, tussle with the other guys working on my production cluster and I was curious what the prevailing opinion is here: The answer is always 'it depends' but there's this: Erwin posted:You know what's dumb? When a production VM has an unimportant thin-provisioned disk attached to it on a datastore that fills up, and VMware suspends the VM. Yes, I get it, the OS won't be happy when it can't write to a disk that it thinks has space, but I'd rather that disk time out or just get disconnected than locking up the whole VM.
|
# ? Feb 7, 2013 03:05 |
|
|
Erwin posted:The answer is always 'it depends' but there's this: Erwin posted:Ultimately I should have been paying attention Erwin posted:I do not automate key alerts in my IT infrastructure for no discernible reason
|
# ? Feb 7, 2013 03:11 |