ragzilla
Sep 9, 2005
don't ask me, i only work here


Kalenden posted:

The second. Preferably, devices not on the same network should be able to reach the VM.

Get a VPS and tunnel out from the VM to the VPS, then set up NAT. This seems like a pretty odd request and could cause some problems with security policies at some places, what's the application?
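
Something like this running on the VM (plus a DNAT or proxy rule on the VPS) is the usual shape of the "tunnel out" part. Untested sketch; the hostname, ports and key setup are all made up:

code:

#!/usr/bin/env python
# Keep a reverse SSH tunnel up from the VM to the VPS so a port on the
# VPS forwards back to a service on the VM. Hostname, ports and restart
# delay are made-up placeholders.
import subprocess
import time

VPS = "tunnel@vps.example.com"   # hypothetical VPS account
REMOTE_PORT = 8443               # port the outside world hits on the VPS
LOCAL_PORT = 443                 # service running on the VM itself

while True:
    # -N: no remote command, -R: reverse forward. ExitOnForwardFailure
    # makes ssh exit if the forward dies so the loop can re-establish it.
    subprocess.call([
        "ssh", "-N",
        "-o", "ExitOnForwardFailure=yes",
        "-o", "ServerAliveInterval=30",
        "-R", "{}:localhost:{}".format(REMOTE_PORT, LOCAL_PORT),
        VPS,
    ])
    time.sleep(10)   # back off briefly before reconnecting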

evol262
Nov 30, 2010
#!/usr/bin/perl

Kalenden posted:

The second. Preferably, devices not on the same network should be able to reach the VM.

A static IPv6 address won't cover this unless you also know a fair amount about where it's going to be powered on, whether they have native IPv6, what their firewalls look like, etc.

If you want to do this, your best bet is to set it up at the end of a tunnel broker's gateway (Hurricane Electric or another), with an rc.local script that updates the HE endpoint with the public IPv4 address of wherever it's running, then establishes the tunnel.
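
Roughly what that rc.local script ends up looking like. Hedged sketch, not HE's official tooling: the tunnel ID, update key, addresses, and even the update URL are placeholders you'd want to check against Tunnelbroker's docs.

code:

#!/usr/bin/env python
# Tell Tunnelbroker our current public IPv4, then (re)build the 6in4
# tunnel. All IDs, keys and addresses below are made-up placeholders.
import subprocess
import requests

TUNNEL_ID = "123456"                    # hypothetical HE tunnel ID
HE_USER, HE_UPDATE_KEY = "myuser", "my-update-key"
HE_SERVER_V4 = "216.66.0.1"             # HE's side of the tunnel (placeholder)
CLIENT_V6 = "2001:db8:1234::2/64"       # your side of the tunnel (placeholder)

# Figure out the public IPv4 we're currently behind.
my_ip = requests.get("https://ipv4.icanhazip.com").text.strip()

# Point the tunnel endpoint at it (dyndns-style update API).
requests.get("https://ipv4.tunnelbroker.net/nic/update",
             params={"hostname": TUNNEL_ID, "myip": my_ip},
             auth=(HE_USER, HE_UPDATE_KEY))

# Re-create the 6in4 tunnel interface locally (the "del" may harmlessly
# fail if the interface doesn't exist yet).
for cmd in (
    "ip tunnel del he-ipv6",
    "ip tunnel add he-ipv6 mode sit remote {} ttl 255".format(HE_SERVER_V4),
    "ip link set he-ipv6 up",
    "ip addr add {} dev he-ipv6".format(CLIENT_V6),
    "ip -6 route add default dev he-ipv6",
):
    subprocess.call(cmd.split())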

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Wicaeed posted:

What is everyone's thoughts on Pernix FVP and other software solutions that use local Flash/SSD storage to cache read/writes to improve VM I/O?

Right now I'm looking at a very small deployment of vSphere (3 hosts) for our first production environment. The only SAN storage we really have available to us is an Equallogic PS4110 w/7.2kRPM NL-SAS drives.

If I wanted to ensure a reasonable expectation of performance for our VMs, would such a solution be worth pursuing?

I just went through their dog and pony show and I am kicking myself for not knowing about this 3 years ago, when we started seeing storage latency issues and my boss wasn't forking over the money for NetApp upgrades. The tech looks good; one of the major VDI hosts in the region uses it on everything, and they're doing absurd things like backing 400 VDI seats with nothing more than 8 7.2k disks and 750GB PCIe SSDs just to prove they can do it.

FYI, Equallogic has their own similar host-side caching tech; I can't remember the name of it, but you should probably talk to someone from Dell because it sounded platform-agnostic instead of VMware-only like Pernix. Might not work with that specific model, though.

My big warning with host caching tech is that it's only going to help with "normal" workloads. A backup job that touches every single block on the array is going to have a high cache miss rate and bring you to your knees. Make sure you are using a snapshot-based backup system like Veeam/AppAssure instead of full dumps for backup or you could still be in trouble.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

BangersInMyKnickers posted:

My big warning with host caching tech is that it's only going to help with "normal" workloads.

It's not just normal workloads, it's random read workloads, specifically. Most host based flash will only cache reads, and only random reads. Some, like Pernix, can cache writes as well, but write caching comes with some tradeoffs and is generally less beneficial than read caching because the data must destage to HDD eventually, so it doesn't remove load from the backing storage, it just smooths it some.

It's still useful for writes in the sense that the IO it removes from the disks that would normally be used to service random reads can now be used to process write IO, but it's still important to know what your workloads are to determine what sort of benefits you will see.

Stuff like VDI can be a lot more write intensive than people think, so it's important to make sure that you've got enough IO in your stable storage to support that write activity, irrespective of what you're doing with host caching.
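
To put the destage point in toy terms, here's a little model (numbers made up, nothing like a real array): writes are acked at flash speed, but every block still has to land on the backing disks eventually, so the HDDs see the same total write load, just smoothed out.

code:

import collections

# Toy write-back cache: writes get acked immediately into a dirty list,
# but a background destager still pushes every block to the backing
# disks a few at a time.
class WriteBackCache(object):
    def __init__(self, destage_per_tick=4):
        self.dirty = collections.deque()
        self.destage_per_tick = destage_per_tick
        self.disk_writes = 0

    def write(self, block):
        self.dirty.append(block)        # acknowledged at "flash" latency

    def tick(self):
        for _ in range(min(self.destage_per_tick, len(self.dirty))):
            self.dirty.popleft()
            self.disk_writes += 1       # the HDDs still eat this write

cache = WriteBackCache()
bursty_writes = [20, 0, 0, 5, 0, 30, 0, 0]   # bursty VM write pattern
for burst in bursty_writes:
    for block in range(burst):
        cache.write(block)
    cache.tick()

print("total writes issued:", sum(bursty_writes))
print("already destaged to disk:", cache.disk_writes)
print("still dirty in flash:", len(cache.dirty))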

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Our biggest issues with cache-based arrays have generally been high numbers of small writes. On our Sun arrays we run around 400 VDI machines, and we simply had too many writes at peak times. Reads were almost always satisfied by cache, roughly 99.7% of the time.

Wicaeed
Feb 8, 2005

BangersInMyKnickers posted:

I just went through their dog and pony show and I am kicking myself for not knowing about this 3 years ago, when we started seeing storage latency issues and my boss wasn't forking over the money for NetApp upgrades. The tech looks good; one of the major VDI hosts in the region uses it on everything, and they're doing absurd things like backing 400 VDI seats with nothing more than 8 7.2k disks and 750GB PCIe SSDs just to prove they can do it.

FYI, Equallogic has their own similar host-side caching tech; I can't remember the name of it, but you should probably talk to someone from Dell because it sounded platform-agnostic instead of VMware-only like Pernix. Might not work with that specific model, though.

My big warning with host caching tech is that it's only going to help with "normal" workloads. A backup job that touches every single block on the array is going to have a high cache miss rate and bring you to your knees. Make sure you are using a snapshot-based backup system like Veeam/AppAssure instead of full dumps for backup or you could still be in trouble.

Dude I posted that almost a year ago.

I thought someone hacked my account since I couldn't remember making that post :v:

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Yeah, I know, but there hasn't been much discussion about it. Depending on your workload it can be a huge benefit, and VMware's host-side caching is garbage in comparison, so might as well throw it back out there.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


Is there a way I can download an evaluation version of VMware, say 5.0 or 5.5?

I'm trying to teach myself how to do an upgrade, but the only evaluation option available appears to be 6.0.

dox
Mar 4, 2006
Looking for some home lab advice-

I'm currently running a Fractal R5/ASUS H97M/i5-4590S setup that has done fine for single-host ESX labbing, but my coworker wants to buy it off me and I have the opportunity to improve.

I'm having a tough time deciding whether I should just go for a dual Intel NUC setup or a 1U build. I want to set something up that I can eventually study/lab with for the VCP. I'll likely start with one host and then go to two. I'm eventually going to want to have a rack for my 4U NAS build, but don't have one yet.

Intel NUC Build

quote:

Intel NUC NUC5i5RYH - $390 x2
8GB SODIMM - $56 x4
= $1004 ($502 each)

1U Build

quote:

SUPERMICRO SYS-5018D-MF 1U (MBD-X10SLL-F & PSU) - $273 openbox
Intel Xeon E3-1220V3 - $205
Memory: 4x8GB = 32GB - own
= $478

Other?

quote:

SUPERMICRO MBD-X10SLL-F-O uATX - $168
Intel Xeon E3-1220V3 - $205
Fractal R5/other smaller case or 1U rack? - $110
SeaSonic M12II 520 Bronze 520W - $70
Memory: 4x8GB - own
= $553

Any words of advice/caution? I'm leaning heavily towards the 1U build: good form factor, the noise is not bad, plus the added bonus of IPMI and a Xeon.

edit: and on a completely unrelated side note, does anyone know if the Intel Quad Port PT NIC (EXPI9404PTBLK) is supported in ESXi?

dox fucked around with this message at 00:31 on Mar 22, 2015

Hadlock
Nov 9, 2004

Is this for your house? I was extremely concerned about fan noise, even though it lives in the closet under the stairs.

I ended up buying this 4U rackmount case:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811147164

And I have been extremely happy with it. It holds 15 full-size hard drives, takes standard desktop-sized components, and, more importantly, airflow is handled by 120mm fans, which keeps fan noise extremely low. In the grand scheme of things 1U vs 4U doesn't really matter unless you're leasing physical space in a datacenter somewhere.

I have one of the VMs crunch data for Folding@Home and World Community Grid when it's not otherwise in use, so it sits at around 85% utilization most of the time and things stay nice and cool (thanks, Haswell!)

That said, the i7 NUC looks like a pretty good option. It's not fanless though, and you're limited to about 1TB total space with modern SSD prices.

dox
Mar 4, 2006

Hadlock posted:

Is this for your house? I was extremely concerned about fan noise, even though it lives in the closet under the stairs.

Not even a full house, just an apartment. Noise is definitely a concern, but based on this video it doesn't seem to be that loud. You're right though, I'm probably not headed in a good direction sound-wise going 1U.

thebigcow
Jan 3, 2001

Bully!
1U equipment is loud. If you want to build a rackmount, go ahead, but stick to 4U so you can use normal fans and things. Those NUCs cost more, but you'll have something interesting to use when you are done with your lab.

That NIC isn't on the HCL (http://www.vmware.com/resources/compatibility/search.php?deviceCategory=io), and the chipset on it comes up as a dual-port gigabit Dell NIC. I'd probably pass.

Look up the NIC on http://ark.intel.com to get the chipset and then look for that on the HCL before you buy anything.
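
If you already have the card sitting in any Linux box, the PCI vendor:device IDs are the thing to search the HCL for, not the marketing name. Rough sketch, assuming "lspci -nn"-style output (the regex may need tweaking):

code:

import re
import subprocess

# Pull the [vendor:device] IDs for Ethernet controllers out of lspci -nn
# so you can search those on the HCL instead of the product name.
out = subprocess.check_output(["lspci", "-nn"]).decode()
for line in out.splitlines():
    if "Ethernet controller" in line:
        m = re.search(r"\[([0-9a-f]{4}):([0-9a-f]{4})\]", line)
        if m:
            vid, did = m.groups()
            print("search the HCL for VID {} / DID {}: {}".format(vid, did, line))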

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I don't see why you can't lab on a single host and just nest your guest hosts. Go physically small.

Internet Explorer
Jun 1, 2005

Moey posted:

I don't see why you can't lab on a single host and just nest your guest hosts. Go physically small.

Yeah, I can't imagine using more than one server/PC for your home lab. It's really not necessary these days.

Hadlock
Nov 9, 2004

I was looking at buying a fanless Atom 1U (with VT-x support) to practice doing failovers between nodes. I'm not sure how you simulate high availability without a second physical machine. About once a month a used Supermicro 1U with the D525 Atom chip comes up on eBay for about $150 shipped.

Internet Explorer
Jun 1, 2005

It depends: high availability of what? If you're just trying out HA in ESXi, use nested ESXi.
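
For anyone who hasn't done it: the one VM-level knob you need for nested ESXi is "expose hardware assisted virtualization". A hedged pyVmomi sketch; the vCenter address, credentials and VM name are placeholders:

code:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Flip nestedHVEnabled on a lab VM so you can install ESXi inside it and
# play with HA/DRS on one physical box. Lab only: cert check skipped.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "nested-esxi-01")

# Same as the "expose hardware assisted virtualization" checkbox.
spec = vim.vm.ConfigSpec(nestedHVEnabled=True)
vm.ReconfigVM_Task(spec)

Disconnect(si)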

Hadlock
Nov 9, 2004

Glad I asked! Thanks! Right now I'm running Hyper-V for some bone-headed reason; I really should scrap the whole thing and switch over to ESXi during the next ice storm.

Internet Explorer
Jun 1, 2005

Nice! Good luck with your setup. You should post a trip report in the thread. I'm sure people would find it helpful.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Tab8715 posted:

Is there a way I can download an evaluation version of VMware, say 5.0 or 5.5?

I'm trying to teach myself how to do an upgrade, but the only evaluation option available appears to be 6.0.

If you get an installer for any of the versions, it should throw you into a 60-day eval mode by default.

TeMpLaR
Jan 13, 2001

"Not A Crook"
Anyone install vSphere 6 on anything? Debating trying it in dev or just waiting for some more patches to come out first.

TeMpLaR fucked around with this message at 20:10 on Mar 23, 2015

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

TeMpLaR posted:

Anyone install vSphere 6 on anything? Debating trying it in dev or just waiting for some more patches to come out first.

Have it running on one of our lab hosts right now, actually. We'll probably stay with 5.5 for the time being as we don't really have any reason for the 6.0 changes. I like some of the changes though (centralized ISOs are sweet)

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We're chomping at the bit for vGPU, since we have a 20-host GPU-backed Citrix cluster.

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

FISHMANPET posted:

We're chomping at the bit for vGPU, since we have a 20-host GPU-backed Citrix cluster.

I haven't had time to play with it and a quick Google search doesn't answer my question... Does vGPU allow you to live vMotion VMs to a different host?

If so, I might be making 6.0 a priority as well. vDGA is such a management pain for us, mostly centered around having to coordinate downtime with our GPU customers...

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I'm not 100% sure it does, but I'm like 99% sure, since our virt guy was talking about it and how it meant we no longer had to pin our GPU-backed VMs to a specific host.

evol262
Nov 30, 2010
#!/usr/bin/perl

FISHMANPET posted:

I'm not 100% sure it does, but I'm like 99% sure, since our virt guy was talking about it and how it meant we no longer had to pin our GPU-backed VMs to a specific host.

You'll still have to set policies to prevent migration to hosts without vGPU unless you're ok with falling back to software rendering (and that may not work either, I haven't tried it), but you're not pinned to specific hosts anymore since it's abstracted out a bit more.
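
In vCenter terms that policy is usually a DRS VM/Host group pair plus a "must run on hosts in group" rule. Hedged pyVmomi sketch: the cluster, host and VM names are made up, and the selection below is just naive string matching.

code:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Create VM/Host groups and a mandatory affinity rule so GPU-backed VMs
# only run on the hosts that actually have the GRID cards.
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "VDI-GPU")

gpu_hosts = [h for h in cluster.host if "gpu" in h.name]          # GRID hosts
gpu_vms = [v for v in cluster.resourcePool.vm if "vdi-gpu" in v.name]

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.HostGroup(name="vgpu-hosts", host=gpu_hosts)),
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.VmGroup(name="vgpu-vms", vm=gpu_vms)),
    ],
    rulesSpec=[vim.cluster.RuleSpec(operation="add",
        info=vim.cluster.VmHostRuleInfo(name="vgpu-vms-on-vgpu-hosts",
            enabled=True, mandatory=True,
            vmGroupName="vgpu-vms", affineHostGroupName="vgpu-hosts"))],
)
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)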

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

FISHMANPET posted:

I'm not 100% sure it does, but I'm like 99% sure, since our virt guy was talking about it and how it meant we no longer had to pin our GPU-backed VMs to a specific host.

I asked our BCS rep, and he confirmed that live vMotion should be a reality with vGPU. Now to figure out where that sits with vRA, NSX, and all the other projects we are in the process of evaluating/implementing... :D

evol262 posted:

You'll still have to set policies to prevent migration to hosts without vGPU unless you're ok with falling back to software rendering (and that may not work either, I haven't tried it), but you're not pinned to specific hosts anymore since it's abstracted out a bit more.

Yeah, we already separate the GPU nodes into their own cluster because they are running on rackmounts instead of the blades everything else is on. With vDGA we have been trying to pound into the heads of our Citrix team that they need to handle N+1 inside the application and make sure that their images are spread across the available nodes to handle one of them going down. It will be a whole lot easier if vGPU works out like vSGA does, only with better performance.

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

Fancy_Lad posted:

I asked our BCS rep, and he confirmed that live vMotion should be a reality with vGPU.

No, vMotion doesn't work with vGPU (GRID); it only works with vSGA.

p.s. I hate the drat marketing names that VMware uses for all this crap. The pass-through/mediated pass-through stuff can't vMotion. Only the vSGA (virtualized by VMware) stuff can be vMotioned. Oh, and the virtualized graphics was called vgpu years ago before marketing decided to make it a thing. Not confusing at all.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I was really pumped for NFS 4.1 support to get multipath IO but then I found out VAAI isn't supported yet, so I'll probably just wait until 6.5 or one of the update levels to come out.

evol262
Nov 30, 2010
#!/usr/bin/perl

DevNull posted:

No, vMotion doesn't work with vGPU (GRID); it only works with vSGA.

p.s. I hate the drat marketing names that VMware uses for all this crap. The pass-through/mediated pass-through stuff can't vMotion. Only the vSGA (virtualized by VMware) stuff can be vMotioned. Oh, and the virtualized graphics was called vgpu years ago before marketing decided to make it a thing. Not confusing at all.

Ok, so now vGPU is split out, passed-through GRID like SR-IOV? Or is that vDGA? I thought vGPU was abstracting access to the actual GRID GPUs through the hypervisor, with a native nvidia/amd driver living in vmkernel, and it wasn't passthrough like vDGA. Why can't that be migrated?

And vSGA uses an older guest-level VMware driver instead of the native nvidia/amd driver? What the hell is software 3D called now? Super confusing.

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

DevNull posted:

No, vMotion doesn't work with vGPU (GRID); it only works with vSGA.

evol262 posted:

Ok, so now vGPU is split out, passed-through GRID like SR-IOV? Or is that vDGA? I thought vGPU was abstracting access to the actual GRID GPUs through the hypervisor, with a native nvidia/amd driver living in vmkernel, and it wasn't passthrough like vDGA. Why can't that be migrated?

FWIW, before vSphere 6 was released I heard conflicting reports from two different VMware reps on whether this was/will be possible... And one vendor expert said it was in the works. My impression was vGPU vMotion would either be a release feature with 6 or come in a 6u1/6.1 release.

Ugh. I'm going to have to find some hardware and lab this poo poo out, aren't I?


DevNull posted:

p.s. I hate the drat marketing names that VMware uses for all this crap. The pass-through/mediated pass-through stuff can't vMotion. Only the vSGA (virtualized by VMware) stuff can be vMotioned. Oh, and the virtualized graphics was called vgpu years ago before marketing decided to make it a thing. Not confusing at all.

evol262 posted:

And vSGA uses an older guest-level VMware driver instead of the native nvidia/amd driver? What the hell is software 3D called now? Super confusing.

Story of my goddamn life ever since this "small" project was thought up.

evol262
Nov 30, 2010
#!/usr/bin/perl
Not explicitly VMware related in the VMware thread, but this may interest those of you who want to know how migration works, and I'd be surprised if vMotion did things really differently. Their last techpost on it is 5 years old, but very similar conceptually.

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

evol262 posted:

Ok, so now vGPU is split out, passed-through GRID like SR-IOV? Or is that vDGA? I thought vGPU was abstracting access to the actual GRID GPUs through the hypervisor, with a native nvidia/amd driver living in vmkernel, and it wasn't passthrough like vDGA. Why can't that be migrated?

And vSGA uses an older guest-level driver VMware driver instead of the native nvidia/amd driver? What the hell is software 3D called now? Super confusing.

vGPU isn't passthrough, but it does memory access without hypervisor knowledge. It gets a chunk of system memory that the GRID card can do whatever it wants with, and the hypervisor is never aware of what it is doing. It could theoretically be made to work with vMotion, but Nvidia's driver doesn't currently support it.

vSGA does use the VMware driver in the guest. That can be backed by the software renderer or by hardware. It can be vMotioned.

Wicaeed
Feb 8, 2005
Anyone played around with Sexilog yet? Seems like a pretty decent free replacement for Log Insight using a lightweight ELK installation

http://www.sexilog.fr/
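
Haven't tried it, but since it's ELK underneath you should be able to poke the Elasticsearch backend directly once your hosts are pointed at it. The hostname and index pattern below are guesses, adjust to however the appliance names its indices:

code:

from elasticsearch import Elasticsearch

# Pull recent ESXi "performance has deteriorated" latency warnings out
# of the appliance's Elasticsearch backend.
es = Elasticsearch(["sexilog.lab.local:9200"])

result = es.search(index="logstash-*", body={
    "size": 20,
    "sort": [{"@timestamp": {"order": "desc"}}],
    "query": {"bool": {"must": [
        {"match_phrase": {"message": "performance has deteriorated"}},
        {"range": {"@timestamp": {"gte": "now-24h"}}},
    ]}},
})

for hit in result["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("hostname"), src.get("message"))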

CtrlMagicDel
Nov 11, 2011

TeMpLaR posted:

Anyone install vSphere 6 on anything? Debating trying it in dev or just waiting for some more patches to come out first.

Got it running in my home lab on VMware Workstation. The install (Windows vCenter) seemed a lot easier, but I haven't played much with the more advanced features or with changing certs around, which was one of the big pain points, at least in 5.1, which is what we're on at work.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

Wicaeed posted:

Anyone played around with Sexilog yet? Seems like a pretty decent free replacement for Log Insight using a lightweight ELK installation

http://www.sexilog.fr/

So this is an all-in-one ELK stack? Looks pretty good, actually.

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy
I was posting before about wanting to use two identical Proliants as a HA and/or FT VM server setup.

So far I've got the servers with 12 empty 3.5" drives in front and 2 in back. I got some adapter brackets for the rear bays so I can slot 2.5" SSD's in there.

Someone here said I should tie them together with a single NAS for shared storage, but that kinda creeps me out because of the single point of failure.

Would it make any sense to:

1) Toss two SSDs in the back bays, maybe put them in a RAID 1, and use that to put the VMs on? Modern SSDs don't get messed up in RAIDs like the old models did, correct?

2) Toss 12 platters in the front bays, put them in a RAID 1+0, and use it for storage?

Is that a particularly bad or stupid design? My rack space is limited and I'm trying to keep things simple. I don't get why inserting a NAS as shared storage is the way to go when the servers have all this expandability.

Wicaeed
Feb 8, 2005

Zero VGS posted:

I was posting before about wanting to use two identical Proliants as a HA and/or FT VM server setup.

So far I've got the servers with 12 empty 3.5" drives in front and 2 in back. I got some adapter brackets for the rear bays so I can slot 2.5" SSD's in there.

Someone here said I should tie them together with a single NAS for shared storage, but that kinda creeps me out because of the single point of failure.

Would it make any sense to:

1) Toss two SSDs in the back bays, maybe put them in a RAID 1, and use that to put the VMs on? Modern SSDs don't get messed up in RAIDs like the old models did, correct?

2) Toss 12 platters in the front bays, put them in a RAID 1+0, and use it for storage?

Is that a particularly bad or stupid design? My rack space is limited and I'm trying to keep things simple. I don't get why inserting a NAS as shared storage is the way to go when the servers have all this expandability.

Not so much bad/stupid as it completely won't work with the VMware FT/HA features, since both rely on having shared storage.

IMO you may need to go back and make sure you understand what a single point of failure is, as running a VM on a host using its own local storage is also a single point of failure.

In 90% of use cases a simple NAS/SAN is going to be the easiest way to set up a shared storage device to meet those requirements for FT/HA. There do exist solutions that can pool together the local storage of many different servers into a single pool (or multiple pools), but that's completely out of the scope of what you're trying to accomplish.

Wicaeed fucked around with this message at 01:26 on Mar 25, 2015

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

Zero VGS posted:

I was posting before about wanting to use two identical Proliants as a HA and/or FT VM server setup.

So far I've got the servers with 12 empty 3.5" drives in front and 2 in back. I got some adapter brackets for the rear bays so I can slot 2.5" SSD's in there.

Someone here said I should tie them together with a single NAS for shared storage, but that kinda creeps me out because of the single point of failure.

Would it make any sense to:

1) Toss two SSDs in the back bays, maybe put them in a RAID 1, and use that to put the VMs on? Modern SSDs don't get messed up in RAIDs like the old models did, correct?

2) Toss 12 platters in the front bays, put them in a RAID 1+0, and use it for storage?

Is that a particularly bad or stupid design? My rack space is limited and I'm trying to keep things simple. I don't get why inserting a NAS as shared storage is the way to go when the servers have all this expandability.

Traditionally your single point of failure is a SAN device with dual controllers/redundant everything.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Anyone have any experience with "normal" vMotion over a WAN? I'm looking into implementing fiber between our Seattle and Portland sites, and I'm wondering whether I'd theoretically be able to storage vMotion and host vMotion servers between them. Looks like latency under 10ms and 622Mbps of bandwidth is required. Can anyone confirm?

http://blogs.ixiacom.com/ixia-blog/what-are-the-key-requirements-to-support-vmotion-across-data-center-sites/
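
Can't speak to the official numbers, but before committing to the link it's easy to sanity-check the latency yourself from one site to the other. Rough sketch; the IP and port are placeholders (vMotion itself runs over TCP 8000):

code:

import socket
import time

# Time a handful of TCP connects to a host at the far site and compare
# the RTT against the ~10ms guidance for long-distance vMotion.
REMOTE = ("10.2.0.10", 8000)     # placeholder: vmk IP at the other site
samples = []

for _ in range(20):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    start = time.time()
    try:
        s.connect(REMOTE)
        samples.append((time.time() - start) * 1000.0)
    except socket.error:
        pass
    finally:
        s.close()
    time.sleep(0.2)

if samples:
    print("min/avg/max RTT: {:.1f}/{:.1f}/{:.1f} ms".format(
        min(samples), sum(samples) / len(samples), max(samples)))
else:
    print("no successful connects; check firewall rules and the target IP")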

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

goobernoodles posted:

Anyone have any experience with "normal" vMotion over a WAN? I'm looking into implementing fiber between our Seattle and Portland sites, and I'm wondering whether I'd theoretically be able to storage vMotion and host vMotion servers between them. Looks like latency under 10ms and 622Mbps of bandwidth is required. Can anyone confirm?

http://blogs.ixiacom.com/ixia-blog/what-are-the-key-requirements-to-support-vmotion-across-data-center-sites/

There's a lot of stuff to take into consideration here.

For example, storage. If you're not using something like EMC VPLEX, then the VM is going to be accessing storage back in the original site and all of that traffic is going over your WAN. You could attempt to storage vMotion it to a local datastore at the remote site, but again that's going to chew up a lot of bandwidth.

On the network side, not only do you need to make sure you've stretched the VLAN (using OTV, VXLAN or some other means), you need to think about how traffic flows to and from the VM. For example, if the default gateway is back in the other site and I'm trying to route to another subnet, that traffic has to go back to Seattle over the WAN link. You basically end up with this traffic tromboning effect. OTV (and VXLAN via NSX) can address some of this by making sure there's always a "local" default gateway no matter where the VM lives.

You still need to consider who's talking to the VM, or whether the VM has traffic coming into it from the internet. For example, a web server behind an F5 load balancer in Seattle that gets vMotioned over to Portland is still going to be accessed via Seattle (via your data center interconnect). A lot of these problems can be addressed with things like LISP, but that is still pretty new.
