|
No separate vMotion?
|
# ? Jul 3, 2012 19:26 |
|
Mausi posted:No separate vMotion? I think vMotion is getting lumped in with management traffic, or FT, I don't remember which. I made the plan in my head 2 months ago, and then pretty quickly the boss's boss said "nope, gonna be years before we do that" and now today he's ready to start looking at it. So I gotta kinda scramble to remember what I did last time.
|
# ? Jul 3, 2012 19:35 |
|
FISHMANPET posted:I think vMotion is getting lumped in with management traffic, or FT, I don't remember which. I made the plan in my head 2 months ago, and then pretty quickly the boss's boss said "nope, gonna be years before we do that" and now today he's ready to start looking at it. So I gotta kinda scramble to remember what I did last time. Just speaking from experience and I don't mean to sound preachy, but those types of projects are the worst and always bite you in the rear end. Double and triple check your plans. Back when you were researching it you probably weren't super careful because it was a pipe dream, but now that you've been given the green light you may look back at your notes and think everything is perfect, and chances are you'd be wrong.
|
# ? Jul 3, 2012 19:41 |
|
As I went through it last time I was reading through Mastering VMware vSphere 5 and VMware vSphere Design (which, though it was written for 4, covers the same concepts). It was much more of a possibility back then so I was planning it for real, not just for funsies, but I'm going to be poring over my material again to make sure I don't gently caress it up.
|
# ? Jul 3, 2012 19:48 |
|
Yeah, you can do vMotion on Management, seeing how management is usually idle or very low-bandwidth traffic. Are you doing it like this (crappy MS Paint drawing)?
Management: primary nic0, fixed failover to standby adapter nic1
vMotion: primary nic1, fixed failover to standby adapter nic0
This eliminates any congestion while addressing failure scenarios.
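To make the active/standby pairing concrete, here's a minimal sketch in plain Python (not actual vSwitch configuration; the port group and vmnic names are placeholders, not anyone's real setup) showing which uplink each port group lands on when a NIC fails:

```python
# Minimal sketch of the active/standby uplink layout described above.
# Port group and vmnic names are placeholders, not pulled from a real config.
TEAMING = {
    "Management": {"active": "vmnic0", "standby": "vmnic1"},
    "vMotion":    {"active": "vmnic1", "standby": "vmnic0"},
}

def effective_uplink(portgroup, failed_nics):
    """Return the uplink a port group actually uses, given a set of failed NICs."""
    policy = TEAMING[portgroup]
    if policy["active"] not in failed_nics:
        return policy["active"]
    if policy["standby"] not in failed_nics:
        return policy["standby"]
    return None  # both uplinks down

if __name__ == "__main__":
    # Normal operation: each traffic type has an uplink to itself.
    print({pg: effective_uplink(pg, set()) for pg in TEAMING})
    # vmnic0 fails: both port groups share vmnic1, so congestion only
    # appears in the failure case, which is the trade-off described above.
    print({pg: effective_uplink(pg, {"vmnic0"}) for pg in TEAMING})
```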
|
# ? Jul 3, 2012 20:14 |
|
Also, looking at your whiteboard there, it implies that you're going to run 4x 10GbE connections to each host, which is ridiculous. Presumably you're running a single 10GbE to each host from each physical switch, then VLAN-segregating your traffic types? At which point carving off vMotion or anything else is an arbitrary task.
|
# ? Jul 3, 2012 20:27 |
|
Corvettefisher posted:Yeah, you can do vMotion on Management, seeing how management is usually idle or very low-bandwidth traffic. Yeah, that'll probably be what we do on the ports where two services are sharing the same pair. Mausi posted:Also, looking at your whiteboard there, it implies that you're going to run 4x 10GbE connections to each host, which is ridiculous. Actually the plan has 8x 10GbE to each host. Using Dell 8024F switches and Intel DP 10GbE SFP+ NICs, the price per connection is around $600, which I don't think is bad at all. Also, as far as I know, there isn't a way to segregate bandwidth on VLANs if I segment out the traffic on a 10Gb link, though if that's changed that would be awesome.
|
# ? Jul 3, 2012 22:27 |
|
Unless you're going to be running some weird ethernet-bound IO traffic or network-heavy virtualisation (which I doubt) then you don't need any more than 2 connections per host. It depends on what you're going to be hosting, but you probably don't even need to worry about QoS either. For comparison, around 100 of my hosts (24 cores x 144GB) run everything via two 10GbE copper links out to a pair of Cisco Nexus 5Ks with minimal QoS involved. You just shouldn't need the extra cabling.
|
# ? Jul 3, 2012 22:36 |
|
FISHMANPET posted:Actually the plan has 8x 10GbE to each host Holy hell. They're called configuration maximums, not configuration challenges. Mind telling us what your workload is like?
|
# ? Jul 4, 2012 00:15 |
|
FISHMANPET posted:Actually the plan has 8x 10GbE to each host.
|
# ? Jul 4, 2012 00:28 |
|
Well apparently I just got super carried away, based on your reactions, so that's good to know.
|
# ? Jul 4, 2012 00:37 |
|
Since Intel released the E5-2400 series, Dell has also released the R320, R420, and R520. They're significantly cheaper than the equivalent E5-2600 models, and when I talked about the $10k per node cost of the R720 my boss was kind of shocked, so getting an equivalent R520 for 2/3rds the cost is certainly appealing. I know I'm looking at only one QPI link with the E5-2400, but VMware is really good at not scheduling a VM across two separate physical CPUs, so that's not too big of a problem. It's triple-channel memory rather than quad-channel, but I don't think that increase is worth the extra cost. To get the full advantage of triple-channel memory I'd have to load up all 12 slots with 16GB sticks, which gives 50% more memory than I'm entitled to, but we'd have enough memory to bring down a machine for maintenance and not have to worry about it.
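For rough numbers only, here's that memory math sketched out (the 128GB entitlement figure is just inferred from the "50% more than I'm entitled to" remark, and the channel count assumes two E5-2400 sockets):

```python
# Back-of-the-envelope memory math for the config described above.
# The entitlement value is an assumption inferred from the post, not a quoted figure.
SLOTS = 12
STICK_GB = 16
SOCKETS = 2
CHANNELS_PER_SOCKET = 3            # E5-2400 is triple-channel

installed_gb = SLOTS * STICK_GB                                # 192 GB with every slot filled
dimms_per_channel = SLOTS // (SOCKETS * CHANNELS_PER_SOCKET)   # 2 per channel, balanced

assumed_entitlement_gb = 128                                   # inferred from "50% more than entitled"
overshoot = installed_gb / assumed_entitlement_gb              # 1.5 -> the 50% mentioned above

print(f"{installed_gb} GB installed, {dimms_per_channel} DIMM(s) per channel, "
      f"{(overshoot - 1):.0%} over the assumed entitlement")
```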
|
# ? Jul 4, 2012 21:54 |
|
FISHMANPET posted:Since Intel released the E5-2400 series, Dell has also released the R320, R420, and R520. They're significantly cheaper than the equivalent E5-2600 models, and when I talked about the $10k per node cost of the R720 my boss was kind of shocked, so getting an equivalent R520 for 2/3rds the cost is certainly appealing. This may sound odd, but I think you are looking a bit too much into this. It is good you are going over everything with a fine-tooth comb, but weighing triple vs dual channel is a bit much. Sure, any internal networking between virtual machines will be faster, but you really should focus more on CPU, storage, and network. RAM is important, but not to the extent you are going into it; unless you are running high-end scientific number-crunching servers, triple channel vs dual won't show any noticeable performance hits/gains. I think you need to give us an idea of how many and what kind of VMs you are running.
|
# ? Jul 4, 2012 22:26 |
|
The office we use as a DR site is moving, and I'd like to take this opportunity to either colocate our equipment somewhere, or use some sort of hosted VMware infrastructure. Our needs are very small - physically it's 1 host and 1 disk array of about 5TB. Virtually it's 10 VMs, 7 of which are off all the time unless we fail over. I know there are companies that have a VMware infrastructure we can connect to and replicate to using Veeam, but for whatever reason I can't figure out the Google terms to use to find these people. I think vCloud Director gives the ability to bill based on usage, meaning in our situation our bills would be relatively low until a failover event. This does exist, right? Anyone have thoughts on the pros and cons of this vs. colocation? Is colocation definitely going to be cheaper?
|
# ? Jul 5, 2012 19:24 |
|
Microsoft has changed up the licensing model a bit in Server 2012. http://arstechnica.com/information-technology/2012/07/windows-server-2012-licensing-reworked-for-the-cloud/ Basically, the only difference between Standard and Datacenter edition is virtualization rights. 1 Standard edition license covers two sockets in a server and gives 2 virtualization rights beyond the host OS. At $882 for a Standard license, I think this is poised to give virtualization to the masses. You could easily configure a pretty modest two-socket Dell R420 with OS license for around $4500. For a small business, that would replace two older machines (3 if you wanted to have the host OS perform any roles; I haven't seen any restrictions mentioned for the host OS, unlike Standard now). You can also stack the Standard edition licenses, so you can basically add two servers for $900 a pop if the hardware has enough headroom for more VMs. bull3964 fucked around with this message at 01:27 on Jul 6, 2012 |
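Under those terms, license stacking comes down to simple arithmetic. A rough sketch using the $882 Standard price from the post above (the Datacenter figure below is an assumed placeholder for comparison, not a quoted price):

```python
import math

# Rough cost comparison for Windows Server 2012 licensing on one 2-socket host.
# Standard price and per-license VM rights come from the post above; the
# Datacenter price is an assumed placeholder for illustration only.
STANDARD_PRICE = 882          # covers 2 sockets, grants 2 VM rights
VMS_PER_STANDARD = 2
DATACENTER_PRICE = 4800       # assumption: unlimited VMs on the same 2 sockets

def standard_stack_cost(vm_count):
    """Cost of stacking enough Standard licenses to cover vm_count VMs."""
    licenses = max(1, math.ceil(vm_count / VMS_PER_STANDARD))
    return licenses * STANDARD_PRICE

for vms in (2, 4, 8, 12):
    cost = standard_stack_cost(vms)
    cheaper = "Standard stack" if cost < DATACENTER_PRICE else "Datacenter"
    print(f"{vms:>2} VMs: ${cost} stacked Standard vs ${DATACENTER_PRICE} Datacenter -> {cheaper}")
```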
# ? Jul 6, 2012 01:24 |
|
The VMs can only be run under Hyper-V, or?
|
# ? Jul 6, 2012 02:04 |
|
evil_bunnY posted:The VMs can only be run under Hyper-V, or? Well, as long as you're nesting it, the upper OS shouldn't care/know
|
# ? Jul 6, 2012 02:07 |
|
The biggest news, as the article pointed out, seems to be that Microsoft is using this product cycle as an opportunity to move their SBS users onto a subscription revenue model.
|
# ? Jul 6, 2012 02:16 |
|
Misogynist posted:The biggest news, as the article pointed out, seems to be that Microsoft is using this product cycle as an opportunity to move their SBS users onto a subscription revenue model. Haven't most switched to SBS 2011? Seriously, that package is great if you're under 50-ish users
|
# ? Jul 6, 2012 02:31 |
|
Corvettefisher posted:Well, as long as you're nesting it, the upper OS shouldn't care/know It's less a technology question and more a licensing-terms question. That said, the licensing should universally apply regardless of the actual hypervisor in use (unlike... say, Oracle...). If you're already on Datacenter today then it looks like there's a bit of a price change, otherwise the terms sound basically the same. <$5k to run as many Windows VMs as you want on a 2-socket box.
|
# ? Jul 6, 2012 02:33 |
|
Misogynist posted:The biggest news, as the article pointed out, seems to be that Microsoft is using this product cycle as an opportunity to move their SBS users onto a subscription revenue model. This will bite them in the rear end. IT people who get hustled into low-cost consulting love pushing SBS. Because hey, it's easy to set up? This would have been the cycle to mix and match the solutions. Want SQL Azure but Exchange locally? Want Office 365 and SQL locally?
|
# ? Jul 6, 2012 06:51 |
|
incoherent posted:This will bite them in the rear end. IT people who get hustled into low-cost consulting love pushing SBS. Because hey, it's easy to set up? Not only is it easy to set up, but there's no CAL headache for file services.
|
# ? Jul 6, 2012 15:41 |
|
Corvettefisher posted:Normally I wouldn't see someone installing vCenter Server on a desktop OS for a production environment. My mistake Sorry for the late response (I actually got some days off work this week!). Here is the setup that I am working with out there:
- 5 Dell PE 2950s:
- One configured as a "physical master workstation" with Win 7 (per my boss's requirements)
- One running 2008 R2 Standard as a physical DC (per my boss's requirements)
- 3 ESXi hosts licensed for Essentials Plus
My boss really has no knowledge of virtualization and doesn't fully trust it, so he really wants the vCenter server physical (running on the master workstation). After I let him know it wasn't supported on that OS, he requested I install it on the DC out there (can't do that either; AD and vCenter don't like each other on the same install). Trust me, this is all as stupid as it sounds.
|
# ? Jul 6, 2012 17:56 |
|
Moey posted:Sorry for the late response (I actually got some days off work this week!). Your boss should be in charge of moving cubicles and you should take his job it seems.
|
# ? Jul 6, 2012 18:05 |
|
Nebulis01 posted:Your boss should be in charge of moving cubicles and you should take his job it seems. He has trouble moving himself, I think he would have a small stroke if he had to move any equipment. He is out today so I am just building a 2008 R2 VM out there for vCenter. Problem solved. Until he comes in on Monday and has me blow it away and re-build the physical "master workstation" with Server 2008 R2, which means I get to drive out to the Colo again
|
# ? Jul 6, 2012 18:35 |
|
Moey posted:Sorry for the late response (I actually got some days off work this week!). It is possible to install vCenter on a DC. I have done it; it is unsupported, and you just need to change a port number. This thread will explain it: http://communities.vmware.com/thread/213899 But you are right, it is stupid and you shouldn't need to do that. Hopefully you can keep your vCenter VM.
|
# ? Jul 9, 2012 17:37 |
|
I have a 2008 VM that runs a Java-based application, and a while ago the JVM poo poo itself. It restarted itself, but I opened a support case with the vendor to figure out what happened. They say that based on the logs and dump info I sent them, it appears to be a memory problem, and they would like me to do a memory test on the server. Am I correct in thinking that there won't really be any value in doing this on a VM? Since it's just being allocated some chunk of memory on the ESX host, couldn't a potentially bad memory area be allocated to some other server right now?
|
# ? Jul 9, 2012 18:04 |
|
stubblyhead posted:I have a 2008 VM that runs a Java-based application, and a while ago the JVM poo poo itself. It restarted itself, but I opened a support case with the vendor to figure out what happened. They say that based on the logs and dump info I sent them, it appears to be a memory problem, and they would like me to do a memory test on the server. Am I correct in thinking that there won't really be any value in doing this on a VM? Since it's just being allocated some chunk of memory on the ESX host, couldn't a potentially bad memory area be allocated to some other server right now? Did it poo poo itself the weekend before last? I'm going to assume it's a leap second issue.
|
# ? Jul 9, 2012 18:34 |
|
So is there a reason to choose NFS datastores over something block-based? It can be as good as block as far as I can tell, but it seems like it just started as an afterthought and snowballed from there, and when starting from scratch there's no reason to choose NFS if you have iSCSI or FC.
|
# ? Jul 9, 2012 18:36 |
|
^^^ I doubt it was the leap second. stubblyhead posted:I have a 2008 VM that runs a Java-based application, and a while ago the JVM poo poo itself. It restarted itself, but I opened a support case with the vendor to figure out what happened. They say that based on the logs and dump info I sent them, it appears to be a memory problem, and they would like me to do a memory test on the server. Am I correct in thinking that there won't really be any value in doing this on a VM? Since it's just being allocated some chunk of memory on the ESX host, couldn't a potentially bad memory area be allocated to some other server right now? It is possible for it to be memory, but pretty unlikely. My guess is it crashed for some reason, they aren't sure, and they blamed it on memory. Unless it was a case of memory filling up and then dumping, are any other VMs having problems? FISHMANPET posted:So is there a reason to choose NFS datastores over something block-based? It can be as good as block as far as I can tell, but it seems like it just started as an afterthought and snowballed from there, and when starting from scratch there's no reason to choose NFS if you have iSCSI or FC. I go with iSCSI personally; I tend to see lower latency. NFS is a bit easier to set up, and some backup software will want NFS if it can't read VMFS. Dilbert As FUCK fucked around with this message at 18:47 on Jul 9, 2012 |
# ? Jul 9, 2012 18:43 |
|
Corvettefisher posted:I go with iSCSI personally; I tend to see lower latency. NFS is a bit easier to set up, and some backup software will want NFS if it can't read VMFS. Yeah, while NFS is pretty simple to get going, I wouldn't recommend it over iSCSI. We have several large farms running via NFS off a NetApp and they just end up destroying the NetApp's CPU utilization with all the overhead. It was maxed out before we re-aligned all of our old Windows VMs, and it now hovers at like 70%.
|
# ? Jul 9, 2012 18:52 |
|
Corvettefisher posted:^^^ I doubt it was the leap second. No, this was about two months ago, so not a leap second thing. It wasn't an out of memory error. The actual error code was EXCEPTION_ACCESS_VIOLATION. If this is a bad memory error (I am almost certain it isn't), wouldn't it take a memory test on the ESX host itself to detect?
|
# ? Jul 9, 2012 19:23 |
|
Reading through Scott Lowe's Mastering VMware vSphere 5, the end of the storage chapter asks "Why would you use each of the 3 kinds of datastores (iSCSI, FC, NFS)?" and the only reason I can think of to choose NFS is that you don't have block available, or someone is holding a gun to your head and forcing you to for reasons you have no control over.
|
# ? Jul 9, 2012 19:27 |
|
There are a bunch more reasons than that. Simplified provisioning and existing familiarity with NFS security/performance are two.
|
# ? Jul 9, 2012 19:46 |
|
NFS is really nice with VMware View, since you can put up to around 250 VMs per datastore with NFS, while the maximum with VMFS is 140. Also, you can have 32 hosts per VDI cluster with NFS, since the 8-host max per replica limitation doesn't apply.
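Just to put those per-datastore figures in perspective (both limits are the approximate numbers quoted above, not official configuration maximums), the datastore count for a desktop pool works out roughly like this:

```python
import math

# How many datastores a View desktop pool needs under the approximate
# per-datastore limits quoted above (not official configuration maximums).
LIMITS = {"NFS": 250, "VMFS": 140}

def datastores_needed(desktops, datastore_type):
    return math.ceil(desktops / LIMITS[datastore_type])

for desktops in (500, 1000, 2000):
    nfs = datastores_needed(desktops, "NFS")
    vmfs = datastores_needed(desktops, "VMFS")
    print(f"{desktops} desktops: {nfs} NFS datastores vs {vmfs} VMFS datastores")
```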
|
# ? Jul 9, 2012 19:53 |
|
FISHMANPET posted:Reading through Scott Lowe's Mastering VMware vSphere 5, the end of the storage chapter asks "Why would you use each of the 3 kinds of datastores (iSCSI, FC, NFS)?" and the only reason I can think of to choose NFS is that you don't have block available, or someone is holding a gun to your head and forcing you to for reasons you have no control over. Another reason is low-end storage: EMC's VNXe arrays only do dedupe, thin provisioning, and replication with NFS shares; they will not do it over iSCSI. I also know some people do poor man's storage HA with Windows DFS => NFS, which... works. And if you already have NFS up, it is easy to just plop it on there
|
# ? Jul 9, 2012 20:58 |
|
Corvettefisher posted:I also know some people do poor man's storage HA with Windows DFS => NFS, which... works. Just because something works doesn't mean you should do it. I wouldn't count this as a reason to use NFS. What an awful idea.
|
# ? Jul 9, 2012 21:22 |
|
Erwin posted:Just because something works doesn't mean you should do it. I wouldn't count this as a reason to use NFS. What an awful idea. I didn't mean that you should do it or that it's an acceptable practice; I was more or less just stating that I know some people use NFS+DFS for that reason.
|
# ? Jul 9, 2012 21:24 |
|
stubblyhead posted:No, this was about two months ago, so not a leap second thing. It wasn't an out of memory error. The actual error code was EXCEPTION_ACCESS_VIOLATION. If this is a bad memory error (I am almost certain it isn't), wouldn't it take a memory test on the ESX host itself to detect? You know how DMA works? Basically a memory tester will read/write to all addresses. Real hardware will usually just ignore it, but virtual hardware will programmatically care about what the hell you're doing to various addresses, e.g. writing/reading data to the SCSI controller's or NIC's mapped addresses. The VM monitor will sometimes catch it and crash the VM. It depends on what kind of test and data is being written, and where, for that VM. Sane operation of hardware in an OS with drivers will not let weird things be written to hardware addresses, really. Anyway, as far as finding bad memory goes, it's indeed somewhat viable to run it in a VM, in concept. There's still a real memory space given to the VM, and it still uses one or more real processors. But you're really still best off doing it on the metal and giving that testing software everything to play with. Anyway, I think they're making a bad call on it being memory. If you ran out of memory, then you had a memory leak somewhere in the guest, or it's just starved due to an unprecedented workload. Since it's a Windows box, you'll have to check Task Manager and Performance Monitor occasionally to see if the application (or something else) is consuming way too much memory. Kachunkachunk fucked around with this message at 22:28 on Jul 9, 2012 |
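For context on what such a tester is actually doing, here's a toy pattern-test loop in Python — a user-space illustration only, nothing like a real memtest hitting physical and device-mapped addresses from bare metal, which is the distinction made above:

```python
# Toy illustration of the write/read/verify loop a memory tester performs.
# A real tester runs on bare metal against physical addresses; inside a VM the
# same loop only exercises whatever pages the guest happens to be given.
PATTERNS = (0x00, 0xFF, 0xAA, 0x55)   # classic alternating-bit patterns

def pattern_test(size_bytes):
    buf = bytearray(size_bytes)
    for pattern in PATTERNS:
        for i in range(size_bytes):
            buf[i] = pattern               # write the pattern everywhere
        if any(b != pattern for b in buf): # read it back and verify
            return False                   # a bit changed between write and read
    return True

if __name__ == "__main__":
    print("clean" if pattern_test(16 * 1024 * 1024) else "errors found")
```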
# ? Jul 9, 2012 22:25 |
|
Kachunkachunk posted:Anyway, I think they're making a bad call on it being memory. If you ran out of memory, then you had a memory leak somewhere in the guest, or it's just starved due to an unprecedented workload. Since it's a Windows box, you'll have to check Task Manager and Performance Monitor occasionally to see if the application (or something else) is consuming way too much memory. Yeah, agreed. I've seen this happen literally once in about five years of using this software extensively, so it's an edge case no matter how you slice it. I did a little more research, and it sounds like the JVM version it uses has some known issues with some of the garbage collectors. I've let them know as much, and I haven't heard anything back yet. Thanks, all, for backing up my suspicion that a memory test is probably not the best course of action here.
|
# ? Jul 9, 2012 22:47 |