luminalflux
May 27, 2005



Interesting. Unfortunately I only found benchmarks for Windows; it doesn't look like anyone has hard numbers for vmxnet3 vs e1000 on Linux. I just checked, and most of my stuff seems to be running on e1000 with a few newer VMs running vmxnet3, and I haven't seen the issues with vmxnet3 on Linux that the forums seem to be crying about.

And yeah, e1000 is the tulip/lance of the 2010s. (I had to get a separate 4x1G E1000 card since OpenBSD supports that but not the Broadcoms that HP puts in their new servers.)

Edit:

Misogynist posted:

One other neat thing about VMXNET2/3 is that they use shared memory for network communication with the hypervisor, so if you have two VMs on the same host communicating over the same portgroup they can throw traffic at each other as fast as they can write to memory. Depending on how you can collocate your VMs, you can really kick up the responsiveness of a few applications.

brb converting ALL the VMs to vmxnet3

(seriously, a large chunk of what my webapp servers do is network chatter)

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Misogynist posted:

One other neat thing about VMXNET2/3 is that they use shared memory for network communication with the hypervisor, so if you have two VMs on the same host communicating over the same portgroup they can throw traffic at each other as fast as they can write to memory. Depending on how you can collocate your VMs, you can really kick up the responsiveness of a few applications.

Yeah, I kinda figured this was one of those general-knowledge things, but if you didn't know, this is a great reason to switch.

luminalflux posted:


brb converting ALL the VMs to vmxnet3

(seriously, a large chunk of what my webapp servers do is network chatter)
Don't forget to set proper VM affinity rules if you're using DRS.
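
Something like this in PowerCLI if you want to script the rule (untested sketch; the cluster and VM names are made up, and it assumes you're already connected with Connect-VIServer):

# keep the chatty web/app VMs on the same host so vmxnet3 traffic stays host-local
$cluster = Get-Cluster -Name "Prod"                  # hypothetical cluster name
$vms = Get-VM -Name "web01", "app01"                 # hypothetical VM names
New-DrsRule -Cluster $cluster -Name "keep-web-and-app-together" -KeepTogether $true -VM $vms -Enabled $true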

movax
Aug 30, 2008

IOwnCalculus posted:

Because I'm a sucker for passing controllers through instead of individual drives - can you stick a second HBA in there? I picked up a couple of HP-branded LSI 3041 SAS controllers for under $25 each shipped on eBay. Thus, you get to pass your 6Gbps ports straight to a VM, and your 4x 2TB drives can sit on one controller together which gets passed straight to another VM.

Ooh, those are cheap, something to consider. What's the OS support like? I guess I'd have to buy a SFF<->4x SATA cable for those.

Corvettefisher posted:

I'm still wondering what draws you to do the pass-through on the server. Given the number of users you have, the systems you're running, and so on, I don't really think the gain will be substantial, unless you plan to see how long it takes until the controller gives up, or you're running simulations where the hypervisor layer + VMFS is a deal-breaker.

Onboard passthrough can be a bit iffy; I've attempted it in some of my labs and failed.

Generally, for the best networking you'll want to use VMXNET3 when possible; it offers many performance improvements over the E1000.

My POV is hardware engineering (I design server mobos/systems/BIOSes) so to me it makes more logical sense to just pass through the entire PCI device to a guest OS rather than eat any overhead from the hypervisor. Are you saying that it isn't really needed and using VMFS ends up OK?

And the VMXnet3 comments make it sound like I want those as my virtual network adapters instead of the emulated Intel/e1000e kernel module?

IOwnCalculus
Apr 2, 2003





movax posted:

Ooh, those are cheap, something to consider. What's the OS support like? I guess I'd have to buy a SFF<->4x SATA cable for those.

That's the best part when compared to the Packrat-favorite IBM M1015 - this thing has standard SATA ports natively, so no special cables needed. It's strictly an HBA so there's no non-RAID firmware to flash, and the only real downside compared to the M1015 is that it only supports four drives instead of eight - if you need the drive-per-slot density.

The chipset itself is actually an LSI1064E, which seems to be well supported. Nexenta picked it up right away, as did a Windows 7 guest when I passed one through to it in order to run Seatools.
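
For what it's worth, the passthrough hookup can be scripted too once VMDirectPath is enabled on the host - rough PowerCLI sketch, host and VM names invented:

# VM must be powered off and needs a full memory reservation for DirectPath I/O
$esx = Get-VMHost -Name "esx01.lab.local"                        # hypothetical host name
$hba = Get-PassthroughDevice -VMHost $esx -Type Pci |
       Where-Object { $_.Name -match "LSI" }                     # pick the HBA out of the PCI device list
Add-PassthroughDevice -VM (Get-VM -Name "nexenta01") -PassthroughDevice $hba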

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Corvettefisher posted:

Yeah, I kinda figured this was one of those general-knowledge things, but if you didn't know, this is a great reason to switch.

Don't forget to set proper VM affinity rules if you're using DRS.

Wouldn't just bundling them into vApps give you the same results without selecting specific hardware and making the load-balancing more difficult and manual?

movax posted:

Ooh, those are cheap, something to consider. What's the OS support like? I guess I'd have to buy a SFF<->4x SATA cable for those.


My POV is hardware engineering (I design server mobos/systems/BIOSes) so to me it makes more logical sense to just pass through the entire PCI device to a guest OS rather than eat any overhead from the hypervisor. Are you saying that it isn't really needed and using VMFS ends up OK?

And the VMXnet3 comments make it sound like I want those as my virtual network adapters instead of the emulated Intel/e1000e kernel module?

Yes, absolutely. VMXnet3 every single time unless something is so horribly broken that you can't do it.
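
If anyone wants to flip a pile of VMs at once, PowerCLI can change the adapter type - sketch only, VM name made up; do it with the VM powered off and expect the guest to see a brand-new NIC:

# change every adapter on the VM to vmxnet3; re-check the guest's IP config afterwards
Get-VM -Name "web01" | Get-NetworkAdapter |
    Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false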

As for the overhead, there are some VMware whitepapers on VMFS overhead compared to direct block-level access, and it was maybe 1-3% when properly configured with a paravirtual controller on absolutely insane configurations. Unless you are doing something that is entirely I/O bottlenecked (which I doubt, since you are dedicating single spindles to VMs instead of pools), the overhead likely won't even be detectable in your case. You'll just need to make sure you get the shares set up right so a single VM doesn't crowd out the others with IOP requests, since you no longer have that hard partitioning between spindles/VMs.
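
Per-disk shares can be set from PowerCLI too if you'd rather not click through every VM - rough sketch, names invented and the disk-shares parameters are from memory, so double-check them:

$vm = Get-VM -Name "db01"                                        # hypothetical VM name
$disk = Get-HardDisk -VM $vm | Select-Object -First 1
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -Disk $disk -DiskSharesLevel Custom -NumDiskShares 2000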

BangersInMyKnickers fucked around with this message at 01:17 on Jan 16, 2013

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

movax posted:


My POV is hardware engineering (I design server mobos/systems/BIOSes) so to me it makes more logical sense to just pass through the entire PCI device to a guest OS rather than eat any overhead from the hypervisor. Are you saying that it isn't really needed and using VMFS ends up OK?

And the VMXnet3 comments make it sound like I want those as my virtual network adapters instead of the emulated Intel/e1000e kernel module?

Yeah, it will perform better, though the improvement will probably be small at best. You also create a hardware dependency on that box, or rather on that device. So when the hardware starts aging and replacement time comes around, you're probably going to wish you could just click a few things over rather than deal with what that hardware dependency costs you.

I was looking for an article I saw yesterday, but all I can find is this, which is RDM vs. VMFS; as you can see, the difference is negligible.

Personally I would stick with as few device dependencies as I have to and let the software handle it.

BangersInMyKnickers posted:

Wouldn't just bundling them into vApps give you the same results without selecting specific hardware and making the load-balancing more difficult and manual?


I suppose you could, but a DRS rule is pretty easy to set up. vApps would work as well.

Dilbert As FUCK fucked around with this message at 01:19 on Jan 16, 2013

movax
Aug 30, 2008

Hmm, I got the vSphere client installed and according to KB 1017530, I can't even use my local drives for RDMs. So what's the best way to get my 4x2TB friends exported to a guest OS for SW RAID purposes? 4 separate datastores?

I'm going to throw in a little 1TB drive as a place to drop the actual VMs on for testing in a bit.

e: I will also try to find a reliable noob guide to help me out here as well.

movax fucked around with this message at 01:42 on Jan 16, 2013

doomisland
Oct 5, 2004

Yo, are there any examples of a VMware hypervisor getting exploited from a guest? Or, for that matter, reasons it shouldn't/can't happen?

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

movax posted:

Hmm, I got the vSphere client installed and according to KB 1017530, I can't even use my local drives for RDMs. So what's the best way to get my 4x2TB friends exported to a guest OS for SW RAID purposes? 4 separate datastores?

I'm going to throw in a little 1TB drive as a place to drop the actual VMS on for testing in a bit.

e: I will also try to find a reliable noob guide to help me out here as well.

If you can't throw an entire controller at the VM you want to do raid and you can't do RDMs then yes, a datastore per disk is about the best I think you'll be able to do.
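
If you go datastore-per-disk, PowerCLI saves some clicking - sketch only; the host name is made up and naa.xxx stands in for each disk's canonical name:

$esx = Get-VMHost -Name "esx01.lab.local"            # hypothetical host name
# list the local disks so you can grab each one's canonical name
Get-ScsiLun -VMHost $esx -LunType disk | Select-Object CanonicalName, CapacityMB
# one VMFS datastore per disk; repeat with each canonical name from the listing above
New-Datastore -VMHost $esx -Name "local-2tb-1" -Path "naa.xxxxxxxxxxxxxxxx" -Vmfs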

doomisland posted:

Yo, are there any examples of a VMware hypervisor getting exploited from a guest? Or, for that matter, reasons it shouldn't/can't happen?

It shouldn't happen, but my boss told me about an exploit where a Windows guest could be used to read the memory of other guests. Here's a list of vulnerabilities...

Goon Matchmaker fucked around with this message at 02:04 on Jan 16, 2013

doomisland
Oct 5, 2004

Goon Matchmaker posted:

If you can't throw an entire controller at the VM you want to do raid and you can't do RDMs then yes, a datastore per disk is about the best I think you'll be able to do.


It shouldn't happen, but my boss told me about an exploit where a Windows guest could be used to read the memory of other guests. Here's a list of vulnerabilities...

Thanks for the link. Yeah we don't run Windows boxes on any VMs. I ask because we're probably gonna put a guest OS onto the internet without a condom :)

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

doomisland posted:

Thanks for the link. Yeah we don't run Windows boxes on any VMs. I ask because we're probably gonna put a guest OS onto the internet without a condom :)

Good luck.

movax
Aug 30, 2008

OK, doing some more Googling and reading up, I think I've got some poo poo figured out!

Network:
What CF said: tie the WAN via vmnic0 to a vSwitch port group, and bridge that through another vSwitch (LAN) via a pfSense VM. For some reason my management kernel has currently defaulted to vmnic0, so I can only create the WAN group on vmnic1. I assume I can switch those back around to use only NIC 1/vmnic0 as the input to the pfSense box (because I'm OCD).

Disks:
I can do local RDM through some semi-hackery, ick. Seeing as I need to get an HBA anyway, though:
1x HBA of some type, directly passed through to OpenIndiana with the disks behind it
1x ~80GB SSD holding a 'system' datastore where ESXi lives, ESXi logs live, and, say, the pfSense install lives. Connected to motherboard SATA.
1x ~80GB SSD holding 2 VMDKs that hold OpenIndiana + CentOS (50/50 split), connected to a mobo SATA3 port
1x 256GB SSD given entirely to the application-hosting CentOS (Note 1), connected to a mobo SATA3 port

Note 1: What's the best way to get this to the OS that needs it? I can't raw-passthrough the single disk without doing some weird stuff in the console, so use VMFS?
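
Rough PowerCLI sketch of the WAN/LAN vSwitch part of that plan, in case it helps anyone following along - untested, switch and portgroup names are made up:

$esx = Get-VMHost -Name "esx01.lab.local"            # hypothetical host name
# WAN vSwitch uplinked to the physical NIC the modem feeds
$wan = New-VirtualSwitch -VMHost $esx -Name "vSwitchWAN" -Nic "vmnic1"
New-VirtualPortGroup -VirtualSwitch $wan -Name "WAN"
# internal-only LAN vSwitch with no uplink; pfSense gets a leg in each portgroup
$lan = New-VirtualSwitch -VMHost $esx -Name "vSwitchLAN"
New-VirtualPortGroup -VirtualSwitch $lan -Name "LAN"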

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Corvettefisher posted:

HA is one way to protect a server from extended downtime. What I am trying to convey is that you'll need to cluster SQL appropriately to provide application and service uptime; this, coupled with an affinity rule to keep the VMs on separate hosts, will do you good. If SQL is down, SSO and most of vCenter will be pretty unresponsive (AKA not work).

Set up a new cluster and let them play with it; do not let them play with the production vCenter, I don't care who says they are the poo poo at it. Run ESXi nested, VLAN it, and set up vCenter on it. If you want to manage that, you can install the root vCenter server with Linked Mode to manage the other vCenter from one console, but I would keep it completely separate. The lab at the school I help out with does it this way: the ICM and VCAP courses have their own physically separate servers running nested ESXi hosts for the students.

When you say cluster, are you talking about setting up a Microsoft SQL cluster, or are you just talking about using HA to protect it? From what I remember from the Scott Lowe book, HA stuff gets stored on the hosts and not vCenter, so if the host with vCenter goes down, HA will still be able to restart the VMs; there will just be the downtime while that VM restarts.

As for the instruction cluster, it's not VMware instruction, it's sysadmin and programming stuff, so they'll just be accessing the console, CD-ROM, and power settings of the VM. All the staff is in agreement that we should separate that vCenter from production.

Except we're not in agreement that vCenter and SQL should be split, so we're building one giant VM for no reason in particular.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

When you say cluster, are you talking about setting up a Microsoft SQL cluster, or are you just talking about using HA to protect it? From what I remember from the Scott Lowe book, HA stuff gets stored on the hosts and not vCenter, so if the host with vCenter goes down, HA will still be able to restart the VMs; there will just be the downtime while that VM restarts.

As for the instruction cluster, it's not VMware instruction, it's sysadmin and programming stuff, so they'll just be accessing the console, CD-ROM, and power settings of the VM. All the staff is in agreement that we should separate that vCenter from production.

Except we're not in agreement that vCenter and SQL should be split, so we're building one giant VM for no reason in particular.

SQL plays a large part in how vCenter works; without SQL, vCenter for the most part will not work. HA is one way to protect a SQL server, but keep in mind the things that can hinder a VM from recovering properly, which is why you might want two SQL servers clustered via SQL's own clustering method. Correct, HA does not require vCenter; however, be familiar with what does require it.

Preferably, I would say put the SQL server on one VM and vCenter on another. This will really become noticeable during vCenter upgrades, maintenance, and guest patching. Installing it all on one probably would work; however, the little gotchas you don't realize until after you do it can hurt you in the long run.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

BangersInMyKnickers posted:

We started running de-dupes on the OS-partition VM volume on our NetApp about a month ago. The initial pass of 1.8TB took about 11 days and brought it down to about 650GB actual usage, so a pretty good dedupe ratio. Since then, he's been trying to run dedupes off and on as the change-delta percentage starts creeping up, but when they fire off they take another 10-11 days to complete, which seems way too long. Other volumes containing CIFS shares and upwards of a TB take about 30 minutes or so (but I suspect the block fingerprinting has very little matching there, requiring less of the block-by-block inspection pass, so a very different beast). Both the vswaps and pagefiles (inside the boot volumes) reside there as well, and he is under the impression that this would be destroying performance. I'm not that convinced, since the vswaps should be full of zeros (they've never been used) and the pagefiles aren't being encrypted or dumped at reboot, so that data should be relatively stable. Ideally I would like to move all the pagefiles from SATA to FC, and possibly the vswaps while I'm at it, but we don't have the capacity to handle it right now until some more budget frees up, and frankly I'm not convinced this is the source of our problem.

Any thoughts?

I have no idea why your dedupes would be taking that long. I've done a 10TB dedupe job in 24 hours before on SATA disk.

What version of ONTAP are you running? The 8+ builds have a new dedupe version that's better/faster. What type of disk/aggregates is the OS partition VM volume on? Maybe it's on like 4 disks by itself? It doesn't matter if it's CIFS or VMs or swaps etc, it's done at a block-level.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Corvettefisher posted:

SQL plays a large part in how vCenter works; without SQL, vCenter for the most part will not work. HA is one way to protect a SQL server, but keep in mind the things that can hinder a VM from recovering properly, which is why you might want two SQL servers clustered via SQL's own clustering method. Correct, HA does not require vCenter; however, be familiar with what does require it.

Preferably, I would say put the SQL server on one VM and vCenter on another. This will really become noticeable during vCenter upgrades, maintenance, and guest patching. Installing it all on one probably would work; however, the little gotchas you don't realize until after you do it can hurt you in the long run.

Is there any documentation, or are there any smart people online, that say to split them? I know, and you know, but apparently I work with morons, so...

The 5.1 Best Practices page (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021202) gives hardware specs, but it never really says that you shouldn't put them all in one.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

madsushi posted:

I have no idea why your dedupes would be taking that long. I've done a 10TB dedupe job in 24 hours before on SATA disk.

What version of ONTAP are you running? The 8+ builds have a new dedupe version that's better/faster. What type of disk/aggregates is the OS partition VM volume on? Maybe it's on like 4 disks by itself? It doesn't matter if it's CIFS or VMs or swaps etc, it's done at a block-level.

It's an old 3020c stuck on 7.something unfortunately, and it got cut out of NetApp's support. The aggregate is backed with 3 whole shelves of SATA disk (a combination of 320/500GB disks), and other volumes on that exact same aggregate take a fraction of the time. I have no loving clue here, and we have a 3rd-party company doing the ONTAP support for us now, but they're just stabbing in the dark from what I can see.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Is there any documentation or smart people online that says to split them? I know, and you know, but apparently I work with morons, so...

The 5.1 Best Practices page (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021202) gives hardware specs, but it never really says that you shouldn't put them all in one.
SQL Server likes to assume it owns the server it's installed on, and its default configuration will try to use all the resources available to it. This leads to some interesting contention if you end up with a decently-sized buffer pool and some vCenter processes that take up more memory than they should.

You can do it just fine if you actually know SQL Server administration and know what you're doing, but unless you're prepared to play SQL Server DBA instead of just having your VMware environment work, it's often better to just separate out and let the default configuration do its thing.

Bottom line: keep an eye on DBCC MEMORYSTATUS and you're fine.
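
And if you do end up sharing the box, capping SQL's memory is the usual knob - rough sketch via Invoke-Sqlcmd; the instance name and the 4GB cap are just examples:

# cap max server memory so the vCenter services keep some RAM for themselves
Invoke-Sqlcmd -ServerInstance "VCENTER\SQLEXP_VIM" -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;
"@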

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Misogynist posted:

SQL Server likes to assume it owns the server it's installed on, and its default configuration will try to use all the resources available to it. This leads to some interesting contention if you end up with a decently-sized buffer pool and some vCenter processes that take up more memory than they should.

You can do it just fine if you actually know SQL Server administration and know what you're doing, but unless you're prepared to play SQL Server DBA instead of just having your VMware environment work, it's often better to just separate out and let the default configuration do its thing.

Bottom line: keep an eye on DBCC MEMORYSTATUS and you're fine.

NOPE NO IDEA WHAT WE'RE DOING CAN'T WAIT FOR IT ALL TO BLOW UP.

My motivation wanes a bit more every day on this project. One of the justifications is it was one less OS to keep track of. Well hello, we're virtualizing the gently caress out of everything, we're gonna have a hell of a lot of VMs to keep track of.

E: Also, nobody on staff knows anything about SQL Server.

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

Yesterday we were cleaning out the office of a guy who left after being with the company for 10 years. We found this in a box. Anyone know the hardware requirements for installing it?

BnT
Mar 10, 2006

Is anyone out there using IDS or IPS systems within your VMware environment? Specifically, is it possible to SPAN or port-mirror traffic between two VMs even if they reside on the same host? I see that Sourcefire has this, and I'm wondering if it's possible without that substantial a budget. Would a vDS and a Snort VM get the job done?

Pile Of Garbage
May 28, 2007



DevNull posted:

Yesterday we were cleaning out the office of a guy who left after being with the company for 10 years. We found this in a box. Anyone know the hardware requirements for installing it?


Hardware requirements? Just virtualise it! :science:

Edit: On a more serious note, is there any consensus on whether it is good or bad practice to defragment the contents of a VMDK via the VM's guest OS (i.e. running defrag.exe on a Windows Server that is running within a VM)? With normal HDDs (non-SSD), the effectiveness of defragmenting the disk depends upon the OS being aware of the disk geometry. Does the I/O virtualisation layer of ESXi present storage to VMs in such a fashion that the installed guest OS is aware of the disk geometry? I know there are some instances where defragmentation should be avoided (for instance, when using SAN devices which utilise block-level copy-on-write snapshotting) but, those instances aside, what is the rule of thumb?

Pile Of Garbage fucked around with this message at 19:18 on Jan 16, 2013

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

NEVER defragment a VMDK as a general rule, and if you're virtualizing Windows 7 then make sure you disable the background defrag task. The only time I would consider this would be if it was a thick provisioned VMDK and the datastore was either a singular disk or a JBOD array where there is actual sequential addressing for it to optimize for. It might also work against a cheap RAID array but that is a bit of a jump. More expensive RAID controllers are going to know how to optimize themselves for the most part. Thin provisioned VMDKs abstract out the disk geometry so a defrag there is going to jumble things up worse than what you had before.
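
For the Windows 7 background defrag task, it can be shut off from inside the guest - this assumes the default task path, which may differ on your image:

# run in an elevated prompt inside the guest
schtasks.exe /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable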

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

cheese-cube posted:



Edit: On a more serious note, is there any consensus on whether it is good or bad practice to defragment the contents of a VMDK via the VM's guest OS (i.e. running defrag.exe on a Windows Server that is running within a VM)? With normal HDDs (non-SSD), the effectiveness of defragmenting the disk depends upon the OS being aware of the disk geometry. Does the I/O virtualisation layer of ESXi present storage to VMs in such a fashion that the installed guest OS is aware of the disk geometry? I know there are some instances where defragmentation should be avoided (for instance, when using SAN devices which utilise block-level copy-on-write snapshotting) but, those instances aside, what is the rule of thumb?

Storage vMotion if you are worried about fragmentation; running a defrag on a datastore will zero out the disk (which can be fixed) and won't really do anything to help the VMDK.

Disable SuperFetch and defrag.

Here is the optimization guide for the Windows 7 guest OS:
http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf
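
SuperFetch runs as the SysMain service, so killing it inside the guest is a two-liner - sketch, run as admin:

# stop SuperFetch now and keep it from starting on the next boot
Stop-Service -Name "SysMain" -Force
Set-Service -Name "SysMain" -StartupType Disabled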

movax
Aug 30, 2008

movax posted:

OK, doing some more Googling and reading up, I think I've got some poo poo figured out!

Network:
What CF said: tie the WAN via vmnic0 to a vSwitch port group, and bridge that through another vSwitch (LAN) via a pfSense VM. For some reason my management kernel has currently defaulted to vmnic0, so I can only create the WAN group on vmnic1. I assume I can switch those back around to use only NIC 1/vmnic0 as the input to the pfSense box (because I'm OCD).

Disks:
I can do local RDM through some semi-hackery, ick. Seeing as I need to get an HBA anyway, though:
1x HBA of some type, directly passed through to OpenIndiana with the disks behind it
1x ~80GB SSD holding a 'system' datastore where ESXi lives, ESXi logs live, and, say, the pfSense install lives. Connected to motherboard SATA.
1x ~80GB SSD holding 2 VMDKs that hold OpenIndiana + CentOS (50/50 split), connected to a mobo SATA3 port
1x 256GB SSD given entirely to the application-hosting CentOS (Note 1), connected to a mobo SATA3 port

Note 1: What's the best way to get this to the OS that needs it? I can't raw-passthrough the single disk without doing some weird stuff in the console, so use VMFS?

Being noobish but I'll just bump this for an answer to note 1.

Also I'm doing lazy zero thin provisioning on my low-usage VMs, but will use eager zero thick provisioning for the heavier ones. This seems to make sense to me :frogbon:

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
My boss is shopping for new RAM for our VMware hosts and found some for half the price that Crucial wants, but I've never heard of the brand. Is anyone familiar with Nemix RAM?

http://www.nemixcorp.com/memory-by-manufacturer/ibm/bladecenter/bladecenter-hs21-xm-7995.html

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Frozen-Solid posted:

My boss is shopping for new RAM for our VMware hosts and found some for half the price that Crucial wants, but I've never heard of the brand. Is anyone familiar with Nemix RAM?

http://www.nemixcorp.com/memory-by-manufacturer/ibm/bladecenter/bladecenter-hs21-xm-7995.html
Whatever has their brand on it is likely to be used/reconditioned RAM from some random manufacturer, especially if there's an actual IBM FRU attached. I've never bought from them personally, but their Amazon storefront has a 4.7/5 rating with 1,667 ratings in the last 12 months, so they're unlikely to scam you.

Pile Of Garbage
May 28, 2007



movax posted:

Being noobish but I'll just bump this for an answer to note 1.

Also I'm doing lazy zero thin provisioning on my low-usage VMs, but will use eager zero thick provisioning for the heavier ones. This seems to make sense to me :frogbon:

If I'm understanding your situation correctly (2 x 80GB SSDs formatted as VMFS datastores containing two VMs and 1 x 256GB SSD that needs to be provisioned exclusively to one of the VMs) then I don't see any issue with formatting it as a VMFS datastore, creating a VMDK on the datastore and then attaching it to the VM. Honestly it's been a while since I've had to work with DAS but that should work. Of course you may run into difficulties migrating/scaling out down the track.
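
That step is a couple of PowerCLI lines if you'd rather script it - sketch only; the datastore and VM names are invented, and size the VMDK a bit under the SSD's capacity to leave room for VMFS overhead:

$vm = Get-VM -Name "centos-app"                       # hypothetical VM name
# carve a VMDK out of the 256GB SSD's datastore and hang it off the CentOS VM
New-HardDisk -VM $vm -CapacityGB 230 -Datastore "ssd-256" -StorageFormat Thick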

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

BnT posted:

Is anyone out there using IDS or IPS systems within your VMware environment? Specifically, is it possible to SPAN or port-mirror traffic between two VMs even if they reside on the same host? I see that Sourcefire has this, and I'm wondering if it's possible without that substantial a budget. Would a vDS and a Snort VM get the job done?

What IDS/IPS are you using?

You can't really SPAN a port unless you're using the functionality in the Nexus 1000v or the vDS, since traffic between two VMs on the same host will never actually reach the physical switchport. You have options though! You can put an IDS VM on each ESXi host and make sure they don't vMotion (disable DRS for these VMs). Allow promiscuous mode in the vSwitch/portgroup configuration and the IDS should be able to snoop all the traffic on that vSwitch in whatever portgroup it's connected to. If you need multiple portgroups, then add multiple NICs/sniffers.
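
A rough PowerCLI version of that, assuming a standard vSwitch - the portgroup and VM names are placeholders and I'm going from memory on the security-policy cmdlets, so verify before trusting it:

$esx = Get-VMHost -Name "esx01.lab.local"            # hypothetical host name
# let the sniffer see everything on the portgroup it's plugged into
$pg = Get-VirtualPortGroup -VMHost $esx -Name "IDS-Sniff"
Get-SecurityPolicy -VirtualPortGroup $pg | Set-SecurityPolicy -AllowPromiscuous $true
# pin the sensor so DRS never moves it off this host
Set-VM -VM (Get-VM -Name "snort01") -DrsAutomationLevel Disabled -Confirm:$false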

movax
Aug 30, 2008

cheese-cube posted:

If I'm understanding your situation correctly (2 x 80GB SSDs formatted as VMFS datastores containing two VMs and 1 x 256GB SSD that needs to be provisioned exclusively to one of the VMs) then I don't see any issue with formatting it as a VMFS datastore, creating a VMDK on the datastore and then attaching it to the VM. Honestly it's been a while since I've had to work with DAS but that should work. Of course you may run into difficulties migrating/scaling out down the track.

Yeah, I'm not too concerned about scaling out / migration (this isn't corporate at all). Though, I belatedly realized, what about TRIM commands for these SSDs? Obviously if the raw disk itself isn't getting passed through, is it up to ESXi to do that?

I see forum posts asking about it, looks like my best bet may be a drive with built-in garbage collection.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

movax posted:

Though, I belatedly realized, what about TRIM commands for these SSDs? Obviously if the raw disk itself isn't getting passed through, is it up to ESXi to do that?
TRIM's just plain not gonna happen. On enterprise arrays (FusionIO, Violin, etc.), SSD erase commands happen automatically in the background. Not so for standalone SSDs. You'll have to live with the slowdown.

OnceIWasAnOstrich
Jul 22, 2006

Sorry if this is a dumb question. I'm trying to use ESXi 5.1 to virtualize a couple of machines we use in our research lab. I've got two physical NICs. I originally put the management interface on vmnic2, using the IP 10.1.185.4, and everything is great. I wanted to add a VMkernel port so I can hook this up to our NAS with iSCSI. I add a VMkernel port to vmnic1 and connect using the IP 10.0.185.2. Suddenly my client loses connection, and I can no longer connect to the management interface at 10.1.185.4...but I can connect to 10.0.185.2.

Have I done something horribly wrong, or is it intended that this happen?

OnceIWasAnOstrich fucked around with this message at 17:41 on Jan 18, 2013

Tax Oddity
Apr 8, 2007

The love cannon has a very short range, about 2 feet, so you're inevitably out of range. I have to close the distance.
This may be a stupid question, but I can't find a good answer using Google. I have disabled defragmentation in every Windows guest OS since it is not meant to be of any use, but what about Linux operating systems? Should the occasional fsck on boot be disabled as well?

Pile Of Garbage
May 28, 2007



OnceIWasAnOstrich posted:

Sorry if this is a dumb question. I'm trying to use ESXI 5.1 to virtualize a couple of machines we use in our research lab. I've got two physical NICs. I originally put the management interface on vmnic2, using the IP 10.1.185.4, and everything is great. I wanted to add a VMKernel port so I can hook this up to our NAS with ISCSI. I add a VMKernel port to vmnic1 and connect using the IP 10.0.185.2. Suddenly my client loses connection, and I can no longer connect to the management interface at 10.1.185.4...but I can connect to 10.0.185.2.

Have I done something horribly wrong, or is it intended that this happen?

I suspect you've caused a routing issue by having two VMkernel ports with IP addresses in the same subnet attached to two separate vmnics. Can you log in to the host via the service console or SSH, run the command "esxcfg-route -l", and then post the output?

Edit: Also, ideally you should be segregating iSCSI traffic by having it on a separate subnet and VLAN.
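
If you rebuild it, a dedicated iSCSI portgroup and VMkernel port looks roughly like this in PowerCLI - the VLAN and addresses are examples only:

$esx = Get-VMHost -Name "esx01.lab.local"            # hypothetical host name
$vs  = Get-VirtualSwitch -VMHost $esx -Name "vSwitch1"
# separate portgroup and subnet/VLAN just for iSCSI
New-VirtualPortGroup -VirtualSwitch $vs -Name "iSCSI" -VLanId 20
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs -PortGroup "iSCSI" -IP 192.168.20.10 -SubnetMask 255.255.255.0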

Kachunkachunk
Jun 6, 2011

movax posted:

Hmm, I got the vSphere client installed and according to KB 1017530, I can't even use my local drives for RDMs. So what's the best way to get my 4x2TB friends exported to a guest OS for SW RAID purposes? 4 separate datastores?

I'm going to throw in a little 1TB drive as a place to drop the actual VMS on for testing in a bit.

e: I will also try to find a reliable noob guide to help me out here as well.
You can create the mappings yourself at the command line. This is your only option if you want a guest to have access to 3TB disks (despite the upper limits in ESXi being far greater than 2TB, VMDKs can still only be 2TB minus 512B).

For each of your disks, do:
vmkfstools -z /vmfs/devices/disks/<disk> /vmfs/volumes/<datastore>/<vm>/<disk name>.vmdk

Then go to your VM and add the "existing" disk in question. The above -z switch does passthrough, but at 2TB and smaller you can do virtual-mode RDMs if you still want snapshot capabilities. I am thinking, however, that you really do need it to be 2TB minus 512 bytes; there's a small amount of overhead to account for in delta disks.
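
The PowerCLI equivalent, if you'd rather not do it from the host shell - sketch, VM name invented and <disk> left as the same placeholder as above:

$vm = Get-VM -Name "openindiana01"                    # hypothetical VM name
# physical-compatibility RDM like -z above; use -DiskType RawVirtual if you want snapshots
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/<disk>"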

Sylink
Apr 17, 2004

Nev posted:

This may be a stupid question, but I can't find a good answer using Google. I have disabled defragmentation in every Windows guest OS since it is not meant to be of any use, but what about Linux operating systems? Should the occasional fsck on boot be disabled as well?

I was under the impression you don't need to defrag under Linux.

BnT
Mar 10, 2006

1000101 posted:

You can't really SPAN a port unless you're using the functionality in the Nexus 1000v or the vDS since traffic between two VMs on the same host will never actually reach the physical switchport.

Awesome, thanks. That makes a lot of sense. It seems like it might be easier to upgrade to vDS switching rather than deploy a bunch of sensors and make sure they're on the right host all the time.

Tax Oddity
Apr 8, 2007

The love cannon has a very short range, about 2 feet, so you're inevitably out of range. I have to close the distance.

Sylink posted:

I was under the impression you don't need to defrag under linux.

... I don't know why I thought fsck was for defragging; I guess I have been away from Linux for far too long. Is turning off fsck recommended for thin-provisioned file systems anyway, in order to prevent the file systems from growing to their maximum size? Or is fsck so important that it should not be disabled?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Nev posted:

... I don't know why I thought fsck was for defragging, I guess I have been away from linux for far too long. Is turning off fsck recommended for thin-provision file systems anyhow, in order to prevent the file systems from growing to their maximum size? Or is fsck so important that it should not be disabled?

Don't disable fsck; there really is no need to. Expansion of a thin disk happens when data is written to the disk, and fsck only reads unless it needs to repair something, I believe.

Dilbert As FUCK fucked around with this message at 19:41 on Jan 18, 2013

Tax Oddity
Apr 8, 2007

The love cannon has a very short range, about 2 feet, so you're inevitably out of range. I have to close the distance.
Good to know, thank you!

  • Reply