|
skipdogg posted:Our cluster is having major issues; we've been on the phone with VMware support most of today, and another guy is taking a crack at it right now. 2 hosts just show up as disconnected from vSphere. The VMs are still running and accessible, but VMware support is telling us to manually shut down all VMs and hard-reboot the hosts. It's a last-resort option right now; there are 3 SQL servers on there, among other servers.
|
# ¿ Feb 16, 2013 04:18 |
|
Corvettefisher posted:Can't say I have ever seen that, but it sounds plausible. It wouldn't be the strangest thing I have heard of this week....
|
# ¿ Feb 16, 2013 04:50 |
|
Above Our Own posted:Noob question here: if I'm running a computer inside of Virtual Workstation, does that VM expose ANY information about the host machine? http://arstechnica.com/security/2012/11/crypto-keys-stolen-from-virtual-machine/
|
# ¿ Mar 6, 2013 01:16 |
|
whaam posted:Hearing mixed things on this, but is it possible to attain 4Gb of bandwidth between shared storage and a host using four 1Gb NICs and LACP on stackable switches? My vendor says yes; some forums say no.
|
# ¿ Mar 14, 2013 00:50 |
|
whaam posted:We are using NFS, so it looks like we got some bad info from our vendor. That's a raw deal, because I doubt we will get the IO we need on 1Gbit.
|
# ¿ Mar 14, 2013 01:13 |
|
whaam posted:We have an IO-heavy SQL application we are installing that, according to the software company, needs more than that kind of speed.
|
# ¿ Mar 14, 2013 01:28 |
|
ragzilla posted:You'd still get limited by the src/dst load-balancing schemes unless the storage had multiple IPs and you had multiple LUNs. edit: I suppose you could create multiple VMDKs on multiple datastores which are mapped to different IPs, and RAID them in the guest to get >1Gbps via NFS, but that seems a little over the top.
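To make this concrete, here's a toy model of src/dst IP-hash link selection. The hash below is a stand-in (real switches and vSphere each use their own hash functions, and the addresses are invented), but it shows why a single host-to-filer NFS session is pinned to one bundle member regardless of how many links LACP aggregates:

```python
# Toy src/dst IP-hash policy: one flow always maps to one bundle member.
import ipaddress

def choose_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick an egress link from the src/dst IP pair, as an IP-hash policy would."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % num_links

# One ESXi host talking to one NFS filer IP: every frame hashes to the same
# member, so the session is capped at that single link's 1Gb.
print(choose_link("10.0.0.10", "10.0.0.50", 4))  # same index every time
# A second filer IP (backing a second datastore) can land on another member:
print(choose_link("10.0.0.10", "10.0.0.51", 4))
```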
|
# ¿ Mar 14, 2013 04:04 |
|
ragzilla posted:Moral of this story: just run 10GbE.
|
# ¿ Mar 14, 2013 05:42 |
|
whaam posted:The software vendor had us run the SQLIO test and claimed we needed a 7,000+ IOPS result from it, which is what they got in their controlled environment on RAID10; they didn't want us running RAID-DP at all initially because they claimed it was inferior. We ran the SQLIO test in a development environment and ended up beating that IO requirement, but only when we allocated more than 4Gb of bandwidth to the NFS storage (it was tested on 10GbE). I'm 100% sure their requirements are bullshit, but if we don't play by their rules they will claim our storage is the problem the first time we run into an issue.
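Rough arithmetic suggests why the requirement only fell above ~4Gb of bandwidth. The post doesn't state the SQLIO block size, so the 64KB figure below is an assumption for illustration; at that size, 7,000 IOPS is already most of 4Gbps of line rate:

```python
# Back-of-the-envelope: convert an IOPS target at a fixed block size to Gbps.
# The 64KB block size is assumed, not taken from the post.
def iops_to_gbps(iops: int, block_bytes: int) -> float:
    return iops * block_bytes * 8 / 1e9

for kb in (8, 64):
    print(f"{kb:2d}KB blocks: {iops_to_gbps(7_000, kb * 1024):.2f} Gbps")
#  8KB blocks: 0.46 Gbps  -> would fit in a single 1Gb link
# 64KB blocks: 3.67 Gbps  -> consistent with needing >4Gb to the NFS storage
```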
|
# ¿ Mar 14, 2013 23:11 |
|
whaam posted:I see what you mean. We used a specific write size for the test just to show that, at a similar write size, RAID-DP could pull the same IOPS and MB/s as the RAID10 they benchmarked with; the test's write size had no practical link to the size of the writes the actual program will do.
|
# ¿ Mar 15, 2013 22:36 |
|
Walked posted:Right now, this is a pretty vague question, but: What SAN solutions [ideally under 50k] have you had good experiences with in Hyper-V environments? Just trying to see what has worked for others, and worked well.

6x high-end single-proc Cisco UCS rack mount servers, each with 128GB of RAM: $60k
6x procs of Enterprise Plus VMware licensing (not sure on the exact price; I would guess around $20k including vCenter)
6x procs of Server '08 Datacenter: $20k
2x Nexus 5k switches (layer 2 only): $35k
1x NetApp HA pair: $200k

I put this information here so you can see that in our case, the storage is what we spent the most on, not the least. Nearly every performance issue we have had in our environment has been traced back to the storage in one way or another. It is never the network, and it is definitely not ever the servers. This is probably a consequence of the great visibility we have into CPU, RAM, and network utilization, and the OK-at-best visibility we have into our storage, but either way, storage is the most important thing in any new deployment, in my opinion.
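Summing those line items (and taking the post's ~$20k licensing guess at face value) bears out the point; the NetApp pair is roughly 60% of the spend:

```python
# Spend breakdown from the post above; the VMware figure is the post's own
# rough estimate, not a quoted price.
costs = {
    "UCS servers": 60_000,
    "VMware Enterprise Plus (est.)": 20_000,
    "Server '08 Datacenter": 20_000,
    "Nexus 5k pair": 35_000,
    "NetApp HA pair": 200_000,
}
total = sum(costs.values())
print(f"total: ${total:,}")                                     # $335,000
print(f"storage share: {costs['NetApp HA pair'] / total:.0%}")  # 60%
```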
|
# ¿ Mar 16, 2013 23:33 |
|
evil_bunnY posted:Hold me or I may die laughing.
|
# ¿ Mar 17, 2013 00:32 |
|
Do it from Active Directory restore mode.
|
# ¿ Mar 26, 2013 23:02 |
|
List price might be more expensive, but they are reasonably competitive with normal discounts.
|
# ¿ Apr 4, 2013 03:00 |
|
mAlfunkti0n posted:Currently they are seen as snapshots, since the LUN IDs no longer match the signature on disk. Ughhh.
|
# ¿ Apr 9, 2013 03:45 |
|
Corvettefisher posted:Just a question: what do you all use for backing up?
|
# ¿ May 1, 2013 04:12 |
|
Corvettefisher posted:How does a failed upgrade result in "a couple of days of downtime"... I mean seriously, HOW does that happen?
|
# ¿ May 18, 2013 04:55 |
|
Corvettefisher posted:
|
# ¿ May 21, 2013 02:39 |
|
FISHMANPET posted:I'm thinking that more than anything it's the latency that's killing me, right? This stuff all lives in the same rack connected by a single cheap unmanaged switch; I should be hoping for something around 10ms, right?
|
# ¿ May 23, 2013 06:11 |
|
warning posted:Troubleshooting an issue where hostd is constantly crashing and not allowing you to connect via management software kind of forced my hand. It's escalated to engineering with VMware support at this point, so it seems like it was indeed over my head after all, and I don't need to feel so bad.
|
# ¿ May 31, 2013 05:38 |
|
Or just use a VLAN.
|
# ¿ Jun 1, 2013 02:27 |
|
Anyone know how to solve the issue of getting prompted to reboot every time you fire up a virtual desktop with paravirtual drivers? I am not sure if it is the disk controller driver or the network driver, but as soon as we switch to SAS and E1000, the users stop getting prompted to reboot immediately upon login.
|
# ¿ Jun 1, 2013 21:15 |
|
Corvettefisher posted:Also any reason you are using the Paravirtual on VDs? While I did not build the image myself, I am quite certain it was fully installed beforehand.
|
# ¿ Jun 1, 2013 23:08 |
|
evil_bunnY posted:If you still have to install them, they weren't there before.
|
# ¿ Jun 2, 2013 14:32 |
|
Erwin posted:Are you sure that's what's causing the reboot prompt? Changing EVC mode prompts for a reboot of the VM (at least it did in 4.1; I haven't changed it since upgrading to 5 and 5.1). Maybe you developed the master in a cluster with a different EVC mode than where you're deploying to?

warning posted:Do you have this hotfix installed?
|
# ¿ Jun 2, 2013 22:06 |
|
Misogynist posted:Have you looked at Logstash? It's not quite Splunk, but it's very free. Thanks rear end in a top hat, now that I know this exists I have to implement it.
|
# ¿ Jun 15, 2013 15:05 |
|
three posted:I literally can't think of a single thing View can do that XenDesktop cannot.
|
# ¿ Jun 19, 2013 05:27 |
|
goobernoodles posted:I'll preface this by saying I kind of lucked into my job and I feel like I probably don't know a lot of things a person in my position should know. Please give us an idea of your current infrastructure so we can more effectively advise you.
|
# ¿ Jun 28, 2013 00:58 |
|
1) Get 2x HA pairs of Oracle Sun 7320 storage.
2) Get 4x (2x for each location) Cisco 3750-X gigabit switches.
3) License everything you need.

For #1, I went all out with ours and got roughly 15TB (usable) on each, with tons of cache, for $100k for both pairs. You could get less storage and less cache and probably end up somewhere around $80k.

For #2, I think you can probably do all four switches for well under $10k. They stack and allow you to do cross-switch EtherChannel links, giving you good enough speed and redundancy for your organization's needs.

For #3, I can't exactly comment.

I think you could do the entire project for under $100k. You would be using only gigabit Ethernet rather than 10gig Ethernet or FC, but you can replicate between the two.
|
# ¿ Jun 28, 2013 04:10 |
|
Mr Shiny Pants posted:How are these working out for you? Is the performance good? Have you tested HA on them? I am curious because I am a huge ZFS fan and some real-world experience with Oracle/Sun gear would be nice. They work great for our needs, which are storage for a little over 200 VDI sessions and 50 Citrix servers. I suspect we can double the number of VDI sessions we host without seeing a performance hit. We did play around with HA early on, and the takeover was quite fast: VMs hung for about 2 seconds but then picked right back up again. Honestly, for the price, Oracle should be murdering everyone else based on what I've seen.
|
# ¿ Jun 28, 2013 12:23 |
|
Mr Shiny Pants posted:Is this your primary storage? If not why not?
|
# ¿ Jun 28, 2013 21:14 |
|
If you want to just play around with other OSes, VirtualBox is the best solution, IMO. If you want hypervisor experience, Linux with KVM, Linux with Xen, or Windows 8 with Hyper-V will all be good platforms for a regularly used PC.
|
# ¿ Jul 10, 2013 04:07 |
|
Dilbert As gently caress posted:Kaspersky <3 I have found a new love Hahahhahahahahahah. We use Kaspersky and seriously, gently caress it.
|
# ¿ Jul 20, 2013 00:28 |
|
My home VMware lab is backed by 6x 7200rpm drives in RAID-6. It's plenty fast for me; it's not like I am stressing multiple servers at once like one would see in a production environment.
|
# ¿ Jul 27, 2013 05:17 |
|
Hadlock posted:How responsive is virtualization to additional logical cores? If I am running Hyper-V under WS2012 R2, would I see an improvement in responsiveness in a VM lab setting if I went from a quad-core i5 to an 8 (logical) core Haswell i7? I'm looking at running probably 4 VMs.
|
# ¿ Aug 5, 2013 01:27 |
|
FISHMANPET posted:Holy poo poo that's terrible.
|
# ¿ Aug 23, 2013 23:54 |
|
FISHMANPET posted:Relaxed coscheduling exists, but I don't think it will be that relaxed... Look, I'm not saying it's not a terrible idea; I'm just saying it's not necessarily going to kill the performance of the entire box.
|
# ¿ Aug 24, 2013 17:23 |
|
FISHMANPET posted:Alright, I guess I'm not sure exactly how relaxed coscheduling works. I thought it could just fudge the timings a bit, but could it do more? Is it as powerful as running a few instructions on 16 cores, and then running a few on the other 16, and going back and forth like that? You have 4 cores allocated and one thread that needs to run. Without relaxed coscheduling, the hypervisor will wait until 4 cores are available, and they all get CPU time at once. With relaxed coscheduling, the hypervisor will run your thread, then give equal CPU time to the other cores before it will let the VM execute again. If you have no contention, it's no big deal; you use some extra CPU time. If there is contention already, you just wasted an additional 3x the CPU for that one thread to execute.
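As a sanity check on that 3x figure, here's a deliberately crude model; the function and numbers are invented for illustration and are not ESXi's actual algorithm, which bounds per-vCPU skew rather than scheduling in lockstep:

```python
# Crude model: a 4-vCPU VM with one busy thread. If idle sibling vCPUs are
# given equal time to keep skew bounded, 4 pCPU slices are consumed for
# every 1 slice of useful work.
def cpu_cost(vcpus: int, busy_threads: int, cosched_idle: bool) -> int:
    """pCPU slices consumed per slice of useful work."""
    return vcpus if cosched_idle else busy_threads

useful = cpu_cost(4, 1, cosched_idle=False)   # ideal, work-conserving: 1
charged = cpu_cost(4, 1, cosched_idle=True)   # idle siblings run too: 4
print(f"wasted: {charged - useful}x extra pCPU per slice of work")  # 3x
```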
|
# ¿ Aug 24, 2013 21:14 |
|
With the new read cache, it looks like you assign it per VM. How will that work with shared snapshots in a VDI environment? Can I designate that all of the machines that use this one snapshot can share the cache?

Mierdaan posted:vCPUs, not physical cores.

For people who love to over-provision.
|
# ¿ Aug 26, 2013 23:22 |
|
evol262 posted:Not to put too fine a point on it, but are you a Windows sysadmin? Creating a VLAN-tagged bridge in raw KVM or Xen is exactly like doing it in RHEL. RHEV and oVirt provide wizards for this, as does Hyper-V.

I am an experienced VMware admin, an experienced network admin, an experienced storage admin, and an experienced Linux admin. Setting up OpenStack is a pain in the rear end, and the documentation is, in my opinion, piss poor. It is very technically detailed and probably great for someone who only works with OpenStack or admins Linux servers all day. For someone who is accustomed to simply reading the documentation for the product they want to deploy and then following those instructions, it is not easy to set up OpenStack. The documentation presupposes a great deal of knowledge, and I had to do a significant amount of reading on other related projects before any of the OpenStack stack made sense to me.

The point of jre's post is that for many shops, there is no loving way they could jump to OpenStack. I know my environment could not do it, because although I am sure I could set it up and migrate all of our infrastructure to it if I wanted to, none of the other admins who work on my team would be able to use any of it, and the level of Linux knowledge required makes it impossible for many of them to get up to speed on it.

If OpenStack is really aiming for the VMware stack, including ESXi, vCenter, SRM, and the new storage features, they are going to have to do one of two things: 1) make it more point-and-click GUI driven like VMware's stack is, or 2) improve the documentation to the point that it is easy to follow, even for a "windows admin". The reality is that the bulk of businesses don't have dedicated Linux teams; they have a team of system administrators who have to support a fuckton of random poo poo, and they frankly don't have the time or across-the-board skill level to support OpenStack as it is today.
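For reference, the VLAN-tagged bridge evol262 mentions really is only a handful of netlink operations on a stock RHEL box. Here's a sketch using the pyroute2 library; the interface names and VLAN ID 100 are invented, and the same result is traditionally achieved with `ip link` commands or ifcfg files:

```python
# Sketch: create a VLAN-tagged bridge (eth0.100 enslaved to br100) for
# KVM/Xen guests to plug into. Run as root; names/IDs are illustrative.
from pyroute2 import IPRoute

ip = IPRoute()
uplink = ip.link_lookup(ifname="eth0")[0]

ip.link("add", ifname="eth0.100", kind="vlan", link=uplink, vlan_id=100)
ip.link("add", ifname="br100", kind="bridge")

vlan = ip.link_lookup(ifname="eth0.100")[0]
bridge = ip.link_lookup(ifname="br100")[0]

ip.link("set", index=vlan, master=bridge)  # enslave the tagged interface
ip.link("set", index=vlan, state="up")
ip.link("set", index=bridge, state="up")
ip.close()
```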
|
# ¿ Sep 2, 2013 00:37 |