Docjowles
Apr 9, 2009

Stealthgerbil posted:

Is an SSD essential for a server where I plan on running a bunch of VMs? I currently have two SATA drives in RAID 1, and for like $200 I could throw in two 120GB SSDs in RAID 1. My server only has room for 4 drives because it's only 1U, and I wanted to have two 2TB drives and two SSDs.

Also the server is a cheap Dell CS24-TY and it doesn't support software RAID with ESXi. Really lame :( I suppose I can just buy a RAID card.

Please tell me this is for your home lab and not anything important :smithicide:

Anyway, the answer to these questions is gonna keep being "it depends on what the VMs are doing" and what you mean by "a bunch". If there's any amount of disk I/O, two SATA drives in RAID 1 are gonna be slow as balls. If you're just booting up a Linux box to practice configuring Apache or something, it's probably fine.
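
To put rough numbers on "slow as balls": a back-of-the-envelope sketch in Python, assuming ~75 IOPS per 7.2k spindle and the textbook RAID write penalties. These are rules of thumb, not benchmarks.

code:
WRITE_PENALTY = {"raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def array_iops(disks, iops_per_disk, level, read_fraction=0.7):
    """Host-visible IOPS for a read/write mix; writes cost extra backend ops."""
    raw = disks * iops_per_disk
    penalty = WRITE_PENALTY[level]
    return raw / (read_fraction + (1 - read_fraction) * penalty)

# Two 7.2k SATA drives in RAID 1 at a 70/30 read/write mix:
print(round(array_iops(2, 75, "raid1")))   # ~115 IOPS for the whole array
# A single consumer SSD is in the tens of thousands; hence the advice above.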

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Docjowles posted:

Please tell me this is for your home lab and not anything important :smithicide:

This is exactly what I was thinking.

Don't some people in here work for VMware (if I recall correctly)?

If so, is there any chance I can get someone to look over my organization's licensing (or a contact with someone in the licensing department who knows their poo poo)? CDW (our reseller) seems to have no idea, and everything is currently a big mess due to old purchases, upgrades, and the idiots before me. I have inherited a clusterfuck and need to get this straightened out.

Will gladly pay for assistance with as many shots as you can drink (must be present in the mountains in Colorado to redeem). :)

Edit: It's almost all View licensing, if that makes a difference.

Moey fucked around with this message at 18:02 on Jul 26, 2013

Docjowles
Apr 9, 2009

I'll preface this by saying I have no experience with the product, but today only, Symantec is giving away free copies of Backup Exec V-Ray VM backup software in honor of Sysadmin Appreciation Day. See this tweet for details.

Moey posted:

Will gladly pay for assistance with as many shots as you can drink (must be present in the mountains in Colorado to redeem). :)

He's not kidding. Moey bought me like 3 shots in half an hour when we hung out and I hadn't even fixed anything for him!

Stealthgerbil
Dec 16, 2004


Docjowles posted:

Please tell me this is for your home lab and not anything important :smithicide:

Anyway, the answer to these questions is gonna keep being "it depends on what the VMs are doing" and what you mean by "a bunch". If there's any amount of disk I/O, two SATA drives in RAID 1 are gonna be slow as balls. If you're just booting up a Linux box to practice configuring Apache or something, it's probably fine.

It is for a home lab/file server/Minecraft server. It's actually pretty fast; it has two quad-core Xeon X5570s and 32GB of DDR3 RAM, which is why I picked it up. It has been fine for setting up Windows Server and Debian VMs. I ran benchmarking software and it scored decently, but I haven't gotten a chance to put a real load on it. I rent servers at the moment for my important stuff, but I want to switch to owning my own hardware and colocating, which is why I purchased a cheap server to learn and mess with.

Why would it be a bad idea for anything important though, just slow speeds from RAID 1? I am not super familiar with the best way to set up storage for a basic server and figured I need at least some basic redundancy. I probably should just do a RAID 10, but I am a poor and wanted to only drop like $200 on it. I only paid $300 for the server in the first place.

Stealthgerbil fucked around with this message at 23:23 on Jul 26, 2013

evil_bunnY
Apr 2, 2003

Stealthgerbil posted:

Is an SSD essential for a server where I plan on running a bunch of VMs?

Only if you value your sanity.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
I saw some AV talk.

I use F-Secure, and have for about a year so far. I went with it based on a recommendation in a thread on SH/SC ages ago.

I wish the support process didn't instantly push you to India before you get anything useful, and I've had a few problems, but none of them involve slowing the machines down or making them unmanageable.

No AV solution seems to be perfect, but I chose F-Secure based on multiple criteria: Microsoft Forefront comes nowhere near it in detection tests, and F-Secure is not terrible to administer, so I can't complain too much.

If you get through to Finland for support, know what you're doing, and want multiple products at once (including a Linux-based proxy server), then F-Secure isn't a bad deal.

If anyone from F-Secure ever reads this: clean up your support channels. Badly. I mean, badly. Hire a shitload of people in Finland, fire everyone in the other call centres, and route all the numbers directly there.

You have a promising product, but support and documentation will ruin it. Another example: the most recent Linux proxy doesn't have a web interface! Don't go on holiday! Finish the web interface! Jesus Christ! Don't release half-finished things.

HalloKitty fucked around with this message at 00:33 on Jul 27, 2013

Demonachizer
Aug 7, 2004

evil_bunnY posted:

Only if you value your sanity.

A RAID 5 SAS SAN isn't up to snuff?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

demonachizer posted:

A RAID 5 SAS SAN isn't up to snuff?

RAID 5 isn't without its drawbacks. Look up ZFS and L2ARC; it's really great, and an SSD is really worth it.

Not to mention most vendors are using a similar architecture with SSD caching for a bunch of things. EMC has a nice PDF on how FAST Cache works.
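
To see why a cache tier pays off, here's a minimal sketch of effective latency as a weighted average across tiers. The hit rates are invented for illustration, not measurements from any of these products.

code:
def effective_latency_ms(tiers):
    """tiers: list of (hit_fraction, latency_ms); fractions must sum to 1."""
    assert abs(sum(h for h, _ in tiers) - 1.0) < 1e-9
    return sum(h * lat for h, lat in tiers)

hdd_only   = [(1.0, 8.0)]                              # every read seeks a 7.2k spindle
with_cache = [(0.35, 0.1), (0.45, 0.2), (0.20, 8.0)]   # RAM / SSD cache / disk
print(effective_latency_ms(hdd_only))     # 8.0 ms
print(effective_latency_ms(with_cache))   # ~1.7 ms; most I/O never touches rust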

Pile Of Garbage
May 28, 2007



Dilbert As gently caress posted:

RAID 5 isn't without its drawbacks. Look up ZFS and L2ARC; it's really great, and an SSD is really worth it.

Not to mention most vendors are using a similar architecture with SSD caching for a bunch of things. EMC has a nice PDF on how FAST Cache works.

IBM have a similar feature in the Storwize V7000: Easy Tier.

Docjowles
Apr 9, 2009

Stealthgerbil posted:

It is for a home lab/file server/Minecraft server. It's actually pretty fast; it has two quad-core Xeon X5570s and 32GB of DDR3 RAM, which is why I picked it up. It has been fine for setting up Windows Server and Debian VMs. I ran benchmarking software and it scored decently, but I haven't gotten a chance to put a real load on it. I rent servers at the moment for my important stuff, but I want to switch to owning my own hardware and colocating, which is why I purchased a cheap server to learn and mess with.

Why would it be a bad idea for anything important though, just slow speeds from RAID 1? I am not super familiar with the best way to set up storage for a basic server and figured I need at least some basic redundancy. I probably should just do a RAID 10, but I am a poor and wanted to only drop like $200 on it. I only paid $300 for the server in the first place.

In reliability terms it's fine, especially for a home server. The issue is that it'll be dog slow if you're taxing the disks with I/O in any way. SATA drives are slow. RAID 1 is slow (for writes). Having no hardware RAID card is slow. Combine them all and it's gonna take you an hour just to make a copy of a VM, especially if the Minecraft server is already humming along or you're streaming the latest My Little Pony episode from the media server VM. The fact that it performed great in some random CPU benchmark doesn't mean much if the box can only deliver 10 IOPS.
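
To put numbers on that, some quick arithmetic with assumed throughputs (the ~25 MB/s figure for a busy SATA RAID 1 is a guess, not a measurement):

code:
def copy_minutes(vm_gb, mb_per_s):
    """Minutes to stream vm_gb gigabytes at a sustained mb_per_s."""
    return vm_gb * 1024 / mb_per_s / 60

for label, speed in [("busy SATA RAID 1", 25),
                     ("idle SATA RAID 1", 100),
                     ("single SATA SSD", 300)]:
    print(f"{label}: {copy_minutes(40, speed):.0f} min for a 40GB VM")
# busy SATA RAID 1: 27 min -- add snapshots and other I/O and an hour is easy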

Install ESXi on a USB thumb drive and then invest in a decent SSD to host the VMs. If you want bulk storage for your file server, sure, put that on a couple of SATA drives.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
My home VMware lab is backed by 6x 7200 RPM drives in RAID 6. It's plenty fast for me; it's not like I'm stressing multiple servers at once like one would see in a production environment.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
I run a DS212J in my home lab with two 2TB WD Red drives in it. Works well enough for me.

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
I've finally stabilized my VDP appliance so that it's properly removing old backups and is sitting at what I'd consider a good usage percentage (83-85%), but apparently VMware thinks that having less than 20% free space on that VM is too little. Is there a way to change the thresholds at which it throws an alarm, so that the "VDP appliance is nearly full" alert only gets triggered at 90% and the actual warning alarm at 95%, instead of the 80% it's set to now? Editing the alarm doesn't appear to offer a way to choose the actual percentage, and clicking on "advanced" doesn't give me much help in knowing what to actually type in for more specific conditions.

Is there a reason I shouldn't let my VDP appliance sit at 85% full? I've mostly just been ignoring the alarm, but I'd prefer to have it trigger at a higher percentage if possible.
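
In the meantime, a hypothetical workaround is to poll the appliance's usage yourself and alert at whatever thresholds you want. The classification below is just a sketch of that idea, not VMware's alarm API; how you fetch the usage number is up to you.

code:
WARN, CRIT = 0.90, 0.95        # the thresholds asked for above

def classify(used_fraction):
    """Map appliance usage to an alert level under custom thresholds."""
    if used_fraction >= CRIT:
        return "CRITICAL: VDP appliance nearly full"
    if used_fraction >= WARN:
        return "WARNING: VDP appliance filling up"
    return "OK"

print(classify(0.84))   # OK -- the 83-85% steady state stops paging you
print(classify(0.91))   # WARNING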

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I can't say I would want to run a backup appliance with less than 20% free space; that way you're covered for OH poo poo moments. What size "block" are you using for the VDP: .5TB, 1TB, or 2TB?

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
We're using the .5TB one. I could probably switch to a bigger appliance, but I don't think that's a huge issue. We only have 17 VMs on two hosts, and there's no reason any of our VMs would balloon so big that we'd suddenly need more storage on it. I'm currently backing up 7 days' worth of VMs, and once a month archiving the VDP appliance to an off-site backup.

I'm pretty new to this backup strategy thing though, so maybe there's something I'm missing and not aware of it.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Frozen-Solid posted:

We're using the .5TB one. I could probably switch to a bigger appliance, but I don't think that's a huge issue. We only have 17 VMs on two hosts, and there's no reason any of our VMs would balloon so big that we'd suddenly need more storage on it. I'm currently backing up 7 days' worth of VMs, and once a month archiving the VDP appliance to an off-site backup.

I'm pretty new to this backup strategy thing though, so maybe there's something I'm missing and not aware of it.

I've experienced the same thing but never found a fix. I've also run into some weird problems where it simply stops backing up some VMs for no apparent reason...

evil_bunnY
Apr 2, 2003

Goon Matchmaker posted:

I've experienced the same thing but never found a fix. I've also run into some weird problems where it simply stops backing up some VMs for no apparent reason...

Enterprise software, ladies and gents.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

evil_bunnY posted:

Enterprise software, ladies and gents.

More like VMware is using schmucks like me as beta testers for the enterprise-grade version of VDP. Oh well, at least it's only DEV. We're using Veeam elsewhere, not that it doesn't have its own problems.

Tequila25
May 12, 2001
Ask me about tapioca.
The boss just approved $50K USD for my VM infrastructure upgrade.

I inherited 3 separate VMware ESX 4.0 hosts running on a mishmash of different hardware. Two share an iSCSI PowerEdge MD3000i with 15x1TB 7.2K SAS; one has local RAID. Connecting it all is a 24-port Cisco Catalyst 3850G 1Gb switch. We host about two dozen VMs, mainly backup domain controllers, file and print servers, and a few small web servers. There is one small SQL server for SharePoint that's kinda busy, but most have pretty low requirements.

We want to upgrade to a full vCenter cluster with vMotion and the nice management bells and whistles. We basically want as much uptime as possible. I'm speccing out 3 hosts, one SAN, and a 10Gb switch.

We are committed to using Dell hardware, since they are our biggest client and we get pretty good discounts because of that. I'm looking at 3 PowerEdge R520 servers with dual 6-core CPUs and 96GB each, a PowerVault MD3600i with 12x2TB 7.2K SAS, and a 24-port 10Gbit Ethernet switch, with the VMware vSphere 5 Enterprise Acceleration Kit for 6 processors.

Does this sound like a good plan? Should I invest in faster drives or tiered storage? Should I upgrade to the R720 servers? Is 10Gb Ethernet worth it? Should I go with VMware Ent+? Is any of the old stuff worth reusing, like maybe the old PowerVault MD3000i?

Also, I've never set up my own VMware cluster before, but I helped set up a Citrix XenServer cluster several years ago. Should I go out and buy a book to study this, or can I manage with the online resources?

Thanks!

Docjowles
Apr 9, 2009

If the boss has approved $50k, by all means get them to kick in $80 for a couple decent books :)

Do you ever expect your environment to grow past 3 hosts? If not, the first thing that strikes me is that you could save an assload of money on licensing by using the Essentials Plus kit. Your setup really does not sound demanding enough to justify paying for straight-up Enterprise Plus. And if you're dropping that kind of cash ask your VAR for help sizing the hardware, they'll be more than happy to help benchmark things like IOPS and network requirements.

Edit: and what Caged said. Use the savings to buy better storage that can run 15k SAS disks, more CPU cores, and more RAM. There's very little chance you'll end up CPU limited but you will ALWAYS want more RAM and storage.

Docjowles fucked around with this message at 17:48 on Aug 2, 2013

Thanks Ants
May 21, 2004

#essereFerrari


^ That. Essentials Plus seems designed for exactly what you want it for; unless you need things like Fault Tolerance, Storage vMotion, DRS, DPM, etc., I'd stick with Essentials Plus and save a ton of cash. The R520s are great little boxes and very well priced, but I'd get the dual 8-core CPUs, since Essentials Plus limits you on socket count.

Use the cash you've saved to move to 15k SAS or a unified box with SSD caching if it will stretch that far.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Your budget is low. The VMware licensing alone is going to run you close to 20K. List on that MD3600 with dual controllers and 12 drives is about 22K; you might get 40% off that. I didn't even estimate the servers, but those are at least 8K each. Haven't touched networking and you're at 60K already.

You can save a ton by going with a Standard Acceleration Kit. 10Gb networking isn't needed for your workload either; I would stick with 1Gb.
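
Re-running that arithmetic with the figures from the post (list prices as quoted; the 40% discount is a guess, not a quote):

code:
enterprise_kit = 20_000                 # vSphere Enterprise Accel Kit, ~list
md3600i        = 22_000 * (1 - 0.40)    # dual-controller MD3600i after ~40% off
servers        = 3 * 8_000              # three hosts, low-end estimate
total = enterprise_kit + md3600i + servers
print(f"${total:,.0f}")                 # $57,200 -- over 50K before any switches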

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Tequila25 posted:

The boss just approved $50K USD for my VM infrastructure upgrade.

I inherited 3 separate VMware ESX 4.0 hosts running on a mishmash of different hardware. Two share an iSCSI PowerEdge MD3000i with 15x1TB 7.2K SAS; one has local RAID. Connecting it all is a 24-port Cisco Catalyst 3850G 1Gb switch. We host about two dozen VMs, mainly backup domain controllers, file and print servers, and a few small web servers. There is one small SQL server for SharePoint that's kinda busy, but most have pretty low requirements.

We want to upgrade to a full vCenter cluster with vMotion and the nice management bells and whistles. We basically want as much uptime as possible. I'm speccing out 3 hosts, one SAN, and a 10Gb switch.

We are committed to using Dell hardware, since they are our biggest client and we get pretty good discounts because of that. I'm looking at 3 PowerEdge R520 servers with dual 6-core CPUs and 96GB each, a PowerVault MD3600i with 12x2TB 7.2K SAS, and a 24-port 10Gbit Ethernet switch, with the VMware vSphere 5 Enterprise Acceleration Kit for 6 processors.

Does this sound like a good plan? Should I invest in faster drives or tiered storage? Should I upgrade to the R720 servers? Is 10Gb Ethernet worth it? Should I go with VMware Ent+? Is any of the old stuff worth reusing, like maybe the old PowerVault MD3000i?

Also, I've never set up my own VMware cluster before, but I helped set up a Citrix XenServer cluster several years ago. Should I go out and buy a book to study this, or can I manage with the online resources?

Thanks!

I would look into vSphere Standard; it has HA, vMotion, Storage vMotion, VUM, VDP, RAM and CPU hot-add, and Operations Manager. Or go with Essentials Plus at the very least. You are riding on 50K, and Enterprise will take a big chunk of that right out if you get any kind of support.

For storage, look at tiering if you go with the MD3600i. Size it not by GB or TB but by IOPS; SSD caching is a great way to leverage this, but at the end of the day you'll be bottlenecked by what's on your backend.
Depending on your size requirements, you may want to look into something like this:

MD3220i
4x450GB 15k SAS RAID 0+1
5x600GB 10K SAS RAID 5
6x1TB 7.2k NL-SAS RAID 6
NOTE: You could also throw in some SSDs if you wanted to

Then carve out LUNs based on the drive array type, and place each VM on the LUN appropriate to its IOPS requirements. Configured this way, it comes out to ~12K MSRP for the dual-controller setup with GigE.
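
Ballparking what each of those tiers buys you, using rule-of-thumb per-spindle IOPS (15k ~175, 10k ~125, 7.2k ~75) and the usual write penalties - a sketch, not a sizing exercise:

code:
def tier(disks, size_tb, spindle_iops, penalty, overhead_disks, rf=0.7):
    """Usable TB and host-visible IOPS at a 70/30 read/write mix."""
    usable = (disks - overhead_disks) * size_tb
    iops = disks * spindle_iops / (rf + (1 - rf) * penalty)
    return usable, iops

tiers = {
    "RAID 0+1, 4x450GB 15k": tier(4, 0.45, 175, 2, 2),  # half the disks are mirrors
    "RAID 5,   5x600GB 10k": tier(5, 0.60, 125, 4, 1),  # one disk of parity
    "RAID 6,   6x1TB 7.2k":  tier(6, 1.00, 75, 6, 2),   # two disks of parity
}
for name, (tb, iops) in tiers.items():
    print(f"{name}: {tb:.1f}TB usable, ~{iops:.0f} IOPS")
# Busy SQL VM -> the 15k RAID 0+1 LUN; bulk file shares -> the RAID 6 LUN.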

For switching, I don't see why you should go with 10Gb unless you know you are going to use it and are currently saturating a gig network. Honestly, you can get very far with 1Gb/s in a properly designed virtual environment; 10Gb is great but costly. Personally I would look into two 2960s or two 3750s with a stacking cable, depending on your budget and VAR. Be sure to get at least 2 switches for redundancy.

For hosts, I would spec them according to what your environment needs; Capacity Planner is great for this. The R420s are good dual-socket boxes: you could easily get 3 dual-socket 6-core E5s @ 2.2GHz, 64GB of RAM, and 6 1Gb/s network uplinks for around 3K MSRP a pop. Again, you'll need to see what your host RAM and CPU requirements are.


I can't vouch for Citrix's server virtualization other than to say I'm not a fan of it; their VDI delivery is good, but I'm still ehh about the hypervisor.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Dilbert As gently caress posted:

Be sure to get at least 2 switches for redundancy.

I am calling this out so it doesn't get missed, because anyone who runs a production VMware environment on a single [non-chassis] switch is doing something incredibly stupid!

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Misogynist posted:

I am calling this out so it doesn't get missed, because anyone who runs a production VMware environment on a single [non-chassis] switch is doing something incredibly stupid!

I'm this guy because management doesn't give a poo poo.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Misogynist posted:

I am calling this out so it doesn't get missed, because anyone who runs a production VMware environment on a single [non-chassis] switch is doing something incredibly stupid!

Let's keep this resonating.

Goon Matchmaker posted:

I'm this guy because management doesn't give a poo poo.

My old environment was like that. Looks like next month I'll be working on some converged networking with them and will actually have switching redundancy!

hackedaccount
Sep 28, 2009
Be sure to work with your facilities guy to figure out how the power is laid out, too. If at all possible, plug things in so that if Circuit A fails it only brings down 1 switch and 2 computers instead of all the switches and computers.

You probably don't have a super fancy redundant power setup, but just knowing to plug half the poo poo in here and the other half over there can save your rear end if there's a failure.
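
A sketch of that idea: round-robin the gear across circuits so redundant pairs land on different breakers. The device names are made up, and dual-PSU gear should of course be cabled to both circuits.

code:
from itertools import cycle

devices  = ["switch-1", "switch-2", "esxi-1", "esxi-2", "esxi-3",
            "san-ctrl-a", "san-ctrl-b"]
circuits = {"circuit-A": [], "circuit-B": []}

# Alternate assignments so redundant pairs (switch-1/2, SAN controllers)
# end up on different breakers.
for dev, circ in zip(devices, cycle(circuits)):
    circuits[circ].append(dev)

for circ, gear in circuits.items():
    print(circ, "->", gear)
# circuit-A -> ['switch-1', 'esxi-1', 'esxi-3', 'san-ctrl-b']
# circuit-B -> ['switch-2', 'esxi-2', 'san-ctrl-a']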

Daylen Drazzi
Mar 10, 2007

Why do I root for Notre Dame? Because I like pain, and disappointment, and anguish. Notre Dame Football has destroyed more dreams than the Irish Potato Famine, and that is the kind of suffering I can get behind.
Well, I built a brand new ESXi box with an i7-4770 and a measly 16GB of RAM (but there's room for growth when I save up another $130 next month to buy the other 16GB). I went the cheap route and used a 320GB WD Blue drive as my primary datastore (don't rag on me for it; it's not like the thing is going to see a poo poo-ton of use that would justify an SSD), with a second 320GB WD Blue that I plan to use to back up my primary datastore. I would have just paired them up in RAID 1, but I don't have a RAID controller sitting around and didn't feel like blowing $300 on one at this time - maybe in a few months.

I went ahead and pulled the trigger on buying 2x2TB Seagate NAS drives because Newegg was offering a deal that priced them at $99.98 each (normally $129.98). Too good an offer to resist.

Anyways, my plan is to have, at a minimum, 2 VMs - a Smoothwall firewall and a Slackware Linux file server. The configuration I chose gives me lots of room to grow as my experience with ESXi grows and I find new or cool poo poo I want to try out. BTW, I am taking suggestions for additional VMs if anyone wants to add their $.02

I'm going to use the Seagate drives as file storage for my Linux VM - my roommate mentioned something about RDM and is helping me with the configuration (we currently have an ESXi box that he set up a year or two ago, and I'm setting mine up based on some of the lessons he learned while configuring his). Pretty excited about the whole thing, but if anyone has some pointers about this poo poo I could sure use them.

Kachunkachunk
Jun 6, 2011
If you're interested in learning more about ESXi, it's relatively safe to create nested ESXi servers for testing purposes.
As an aside: that sounds great. I do something similar, but it'll turn into a bit of a dick-waving post if I start detailing the configuration.

NIC teaming could be worth your while, depending on how busy things get, but usually contention hits the RAM (only 16GB), the disks, and the networking, in that order. Sometimes disks first.
Each of my boxes has 32GB of RAM, and it felt tight from time to time. But I do play around with a lot of VMware products for learning/testing.

Hm, another idea I can put your way is to use the SSD caching that's now included in the Linux kernel (or bcache). This'll help with bursts of I/O that the mechanical disks otherwise couldn't handle. You'd just need one (fast) SSD. Or put everything on the SSD and be done with it.
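
A rough model of what write-back SSD caching buys you on bursts, with made-up throughput numbers:

code:
def burst_seconds(burst_mb, ssd_mb_s, hdd_mb_s):
    """Foreground time at SSD speed vs. background drain to spinning disk."""
    return burst_mb / ssd_mb_s, burst_mb / hdd_mb_s

vm_wait, flush = burst_seconds(2048, 400, 80)   # 2GB burst, SSD 400MB/s, HDD 80MB/s
print(f"VM waits ~{vm_wait:.0f}s; disks finish flushing ~{flush:.0f}s later")
# As long as bursts are spaced out, the mechanical disks never become
# the foreground bottleneck -- which is all a lab box usually needs.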

Daylen Drazzi
Mar 10, 2007

Why do I root for Notre Dame? Because I like pain, and disappointment, and anguish. Notre Dame Football has destroyed more dreams than the Irish Potato Famine, and that is the kind of suffering I can get behind.
Well, pretty jazzed with my progress so far. Got ESXi up and running, installed Smoothwall and got it running (my definition of "running" is that I was finally able to access the drat thing through a web console), installed Slackware 14 and attached the 2x2TB NAS drives for file storage, and also installed Windows Server 2012 for shits 'n giggles (actually, I did it because it was the only sure-fire way I figured would allow me to use the Smoothwall web console without a lot more reading and configuring - baby steps for now).

Next step is to get my spare physical box up and running so I can install something like Windows 7 on it and put MediaPortal on it to access the file server (which will require configuring Samba, although I'm thinking maybe another Linux install would be a better choice). So many choices - the possibilities are agonizingly entertaining.

evil_bunnY
Apr 2, 2003

If you've never touched Linux before Slackware may not be the best of ideas.

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

evil_bunnY posted:

If you've never touched Linux before Slackware may not be the best of ideas.

On the contrary, I'd say it's a fantastic idea. You'll be able to figure out what awful broken things they've jammed into the latest Ubuntu/Fedora release and be able to work around/disable them. It's going to be an uphill battle, but it will be worth it.

Also I'd like to plug FreeBSD just because it will give you similar exposure with a bit less uphill battle.

Daylen Drazzi
Mar 10, 2007

Why do I root for Notre Dame? Because I like pain, and disappointment, and anguish. Notre Dame Football has destroyed more dreams than the Irish Potato Famine, and that is the kind of suffering I can get behind.

evil_bunnY posted:

If you've never touched Linux before Slackware may not be the best of ideas.

I've been messing around with Slackware since, I think, version 8 or 9. My friend explained that I would learn a lot more that way, and I can't disagree. Setting up Slackware this time was a breeze compared to previous installs I've done over the years, and KDE was almost anti-climactic to get running - startx and off we went. It helps that my friend has been using Linux for over 20 years, so anything I screw up he can fix, but he prefers to just offer pointers and hints and let me figure it out.

Hadlock
Nov 9, 2004

How responsive is virtualization to additional logical cores? If I'm running Hyper-V under WS2012 R2, would I see an improvement in responsiveness in a VM lab setting if I went from a quad-core i5 to an 8-logical-core Haswell i7? I'm looking at running probably 4 VMs.

evil_bunnY
Apr 2, 2003

Daylen Drazzi posted:

anything I screw up he can fix, but he prefers to just offer pointers and hints and let me figure it out.
It's the best way to learn, too.

evil_bunnY
Apr 2, 2003

Hadlock posted:

How responsive is virtualization to additional logical cores? If I'm running Hyper-V under WS2012 R2, would I see an improvement in responsiveness in a VM lab setting if I went from a quad-core i5 to an 8-logical-core Haswell i7? I'm looking at running probably 4 VMs.

Depends on too many things to count, but probably not much.

Daylen Drazzi
Mar 10, 2007

Why do I root for Notre Dame? Because I like pain, and disappointment, and anguish. Notre Dame Football has destroyed more dreams than the Irish Potato Famine, and that is the kind of suffering I can get behind.

evil_bunnY posted:

It's the best way to learn, too.

Speaking of which, I completely goofed when making the RAID-0 array, and my friend had to fix it while I watched on my computer (we were sharing a PuTTY screen, which is pretty cool). I sat taking notes for most of the session. The file server pretty much screams, averaging over 275 MB/s read and write when we ran a speed test on the new RAID. My friend was pretty impressed, since it was about 50 MB/s faster than his system. I almost clapped like a giddy little schoolgirl.

My plan at this point is to completely blow everything away next weekend and do it all over again - it's actually quite addictive once you get past the frustration of not having any clue where to go next. I think I might write something up from the perspective of a complete newbie - the reference pages I used didn't mention some very elementary aspects of disk management that I'm sure are obvious to experienced users, but because they were left out I screwed up the configuration. Still, a good learning opportunity, but I doubt others would be so philosophical about it when they don't have their very own Linux guru to pester.

KS
Jun 10, 2003
Outrageous Lumpwad
Alarm bells are ringing. RAID-0 is a stripe, not a mirror, and the faster performance makes sense in that light too. A stripe doesn't afford you any kind of data redundancy, which you had stated was the point.
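
The difference in arithmetic form, assuming a ~4% annual failure rate per drive (a common rule of thumb for consumer disks, not a spec):

code:
def p_data_loss(p_disk, n_disks, level):
    """Annual chance of losing the array, ignoring rebuild windows."""
    if level == "raid0":           # any single failure loses everything
        return 1 - (1 - p_disk) ** n_disks
    if level == "raid1":           # every copy has to die
        return p_disk ** n_disks
    raise ValueError(level)

p = 0.04
print(f"RAID 0, 2 disks: {p_data_loss(p, 2, 'raid0'):.1%}/yr")   # 7.8%
print(f"RAID 1, 2 disks: {p_data_loss(p, 2, 'raid1'):.2%}/yr")   # 0.16%
# Twice the speed and capacity, at roughly 50x the odds of losing it all.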

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Hadlock posted:

How responsive is virtualization to additional logical cores? If I'm running Hyper-V under WS2012 R2, would I see an improvement in responsiveness in a VM lab setting if I went from a quad-core i5 to an 8-logical-core Haswell i7? I'm looking at running probably 4 VMs.

The answer is, as always, "it depends." In general the hypervisor is very good at scheduling, and you may even see a loss of performance with hyperthreading. The reality is that you're unlikely to see much of a difference unless you're under contention.
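
A quick sanity check on that, treating a hyperthread as ~0.3 of a real core (a rule of thumb, not a benchmark):

code:
def oversubscription(total_vcpus, physical_cores, smt=False, smt_gain=0.3):
    """vCPU:pCPU ratio, counting a hyperthread as a fractional core."""
    effective = physical_cores * (1 + smt_gain if smt else 1)
    return total_vcpus / effective

vcpus = 4 * 2   # four lab VMs with 2 vCPUs each
print(f"{oversubscription(vcpus, 4):.1f}x")            # quad-core i5: 2.0x
print(f"{oversubscription(vcpus, 4, smt=True):.1f}x")  # 4c/8t i7:   ~1.5x
# Below ~3-4x the scheduler hides this; you only feel it under contention.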

KS
Jun 10, 2003
Outrageous Lumpwad
I feel like I'm missing something completely obvious here. Is there no way to set a VLAN tag for an independent hardware iSCSI adapter through the GUI, even though you can set the IP address and everything else?
