|
I run 4500 VMs on NFS because dedupe, also dedupe and cheaper because of dedupe. Some of our development databases (SQL, Sybase and Oracle) have also been moved over for the same reasons. That being said, we're looking at potentially cheaper storage solutions like DAS for certain kinds of desktop deployment. NFS will never replace DMX/VMAX for us on the server side, but nothing else is coming close on price/performance/manageability (according to the beancounters) to the pile of 6240s we're amassing. And it's getting towards being a loving pile, which is a different problem. We run it on 10GbE on Cisco Nexus though, so this may make a difference over your average small implementation, but it's a lot easier for us to manage right now than the legacy FC environment.
|
# ? Jul 9, 2012 23:48 |
|
When people talk about NFS and dedupe vs iSCSI and dedupe, the key thing is what VMware sees. If you have a volume and a LUN via iSCSI and you get great dedupe, that extra space can only be utilized by the SAN for things like snapshots. The space isn't actually made available to the LUN and the OS on top of it (VMware). If you have an NFS volume and you get great dedupe, VMware sees that free space, so it can utilize it to oversubscribe your storage. Now you're actually using all that free space! Plus, NFS was the only way to exceed 2TB datastores (short of extents) prior to the recent ESXi 5.
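A toy back-of-the-envelope model of that difference (all numbers are illustrative, not any vendor's actual space accounting):

```python
# Toy model: 2 TB volume holding 2 TB of logical data that dedupes 2:1,
# so the array only stores 1 TB of physical blocks.
volume_size_tb = 2.0
logical_data_tb = 2.0
dedupe_ratio = 2.0

physical_used_tb = logical_data_tb / dedupe_ratio  # 1.0 TB actually on disk

# NFS datastore: ESXi sees the volume's real free space, so the
# deduped savings show up directly as usable datastore capacity.
nfs_free_tb = volume_size_tb - physical_used_tb

# iSCSI: the 2 TB LUN still reports 2 TB of logical data written, so from
# the hypervisor's side the datastore looks full; the reclaimed space is
# only visible inside the array's volume (snapshots, etc.).
lun_free_tb = volume_size_tb - logical_data_tb

print(nfs_free_tb, lun_free_tb)
```

Same array, same dedupe ratio; the only difference is which layer gets to see and spend the savings.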
|
# ? Jul 10, 2012 04:55 |
|
madsushi posted:When people talk about NFS and dedupe vs iSCSI and dedupe, the key thing is what VMWare sees. This seems like a myopic and virtualization-admin-centric way of viewing things. If the SAN is deduping and aware of that ratio, that allows you to create more LUNs and more datastores; there's no reason to look at this from a one-LUN standpoint, is there?
|
# ? Jul 10, 2012 06:18 |
|
Mierdaan posted:This seems like a myopic and virtualization-admin-centric way of viewing things. If the SAN is deduping and aware of that ratio, that allows you to create more LUNs and more datastores; there's no reason to look at this from a one-LUN standpoint, is there? It's definitely the virtualization-admin way of thinking, see thread title. So: you just deduped your 2TB volume down to 1TB. With NFS, the story ends here, as VMware sees 1TB of free space. Yay! Now: you just deduped your 2TB volume with a 2TB LUN in it down to 1TB. You have 1TB of free space in the volume. With iSCSI, you're now left with either:

* Shrink the volume
* Make a new volume with the shrunk space
* Make a new LUN in the new volume
* Move VMs to that new LUN (which sucks w/o sMotion)
* Enjoy your lower dedupe ratio since everything isn't in the same volume, and increased management since now you have 2 volumes and 2 LUNs to manage

Or:

* Make a new, smaller LUN in that volume
* Move your VMs to that new LUN (which sucks w/o sMotion)
* Dedupe again
* Repeat several times until your volume is actually "full"
* Enjoy your several miscellaneous-sized LUNs that will cause you all sorts of fun

For VMware, there are very clear advantages to just using a huge NFS datastore to host all of your VMs: better dedupe ratios, easier management, fewer constraints.
|
# ? Jul 10, 2012 06:26 |
|
For sure - my perspective is always the small shop admin view on things, where I'm doing both storage and virtualization. I'm being moved from NetApp to Compellent storage now, so I've had to deal with the actuality of deduping on the file level with NFS and having ESXi be aware of that ratio, or deduping at the block level and leveraging the ratio in your volume layout. There's no huge difference (for me) but if you're in a position where your storage is hands-off and you can just request gently caress-off huge NFS exports, then I definitely see the appeal.
Mierdaan fucked around with this message at 06:37 on Jul 10, 2012 |
# ? Jul 10, 2012 06:31 |
|
FISHMANPET posted:So is there a reason to choose NFS datastores over something block based? It can be as good as block as far as I can tell, but it seems like it just started as an afterthought and snowballed from there, and when starting from scratch there's no reason to choose NFS if you have iSCSI or FC. This seems really backwards -- up until VAAI leveled the playing field NFS was flat-out superior at scaling out, and the five biggest ESX deployments I saw while doing 3.5 and 4.0-era consulting all used NFS storage. Performance is probably equal on 10g fabrics, but NFS just wins from a management perspective in my book. It's historically easier to run NFS over a converged infrastructure. In some cases you save using 10 gig NICs instead of CNAs, and you don't have to worry about the 3 driver installs it takes to get a QLE8242 working in ESXi 5. Even now I'd put Netapp running NFS up against anything I can think of for ease of management. Before 5.0 I was juggling ~45 2TB datastores. Not fun. KS fucked around with this message at 16:22 on Jul 10, 2012 |
# ? Jul 10, 2012 16:19 |
|
It's still a little bit iffy for critical workloads in my book as long as it doesn't support proper MPIO. Yeah, link bonding whatever, but that only gets you so far -- you can't have two independent fabrics.
|
# ? Jul 10, 2012 17:17 |
|
In the UCS world I now live in, no matter if I choose FC, 10g iscsi, or NFS, it gets plugged into an interface on each of a pair of Nexus switches in VPC mode. Since we no longer have completely independent fabrics, maybe it's less of a difference. Doing your redundancy at layer 2 also means not having to manage MPIO on the physical servers, which is probably another point in favor of NFS for manageability.
|
# ? Jul 10, 2012 17:46 |
|
So, onto another question: to hard drive or not to hard drive? I guess the root of my question is what happens to an ESXi host running off of an SD card, but with swap space on a local hard drive, if the hard drive dies (and the swap space therefore disappears). And then, any thoughts on installing on flash media vs installing on a hard drive?
|
# ? Jul 10, 2012 18:05 |
|
FISHMANPET posted:So onto another question, to hard drive or not to hard drive? Hard drive, unless you're doing 20+ hosts, where $200 x 20 = $4,000 starts to make a difference. My teacher loves Auto Deploy + host profiles for central manageability and fast redeploys; hell, he doesn't even buy warranties for his servers anymore -- since he buys cheap servers he can just rebuy later, or spend that budget elsewhere. I like that idea a lot, but I still feel more comfortable with an HDD in there: that way if my vCenter shits itself, or TFTP is down, and a host or two power cycles, I still have HA going for me. Not to mention once an Auto Deploy'd host cuts power, the logs for that host are gone unless you set up a syslog server. On top of that, what servers don't come with at least one hard drive already (excluding blades)? As for installing to USB: yeah, if the vendor gives you the option of ordering a diskless server, I would go that way. How to: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004784 It's also worth keeping in mind for host swapping, which should never happen, but if it were ever needed an HDD would be best.
|
# ? Jul 10, 2012 18:23 |
|
So here's a new diagram, not testing the configuration maximums: We're looking at Dell servers, and they all have the ability to run ESXi off of SD cards. In fact, it looks like if we get VMware directly through Dell, it's required (at least it is when you configure the server; who knows what our rep can do). It makes sense to have one hard drive for swap and other scratch stuff. I know that I can't join an HA cluster without a configured swap space, so I'm wondering what happens to a running server if the local hard drive dies and it loses its swap space. Is it worth spending extra for a second hard drive and possibly a RAID card? And at that point, is it worth spending extra for the SD card (which can come out to more than the price of a hard drive)?
|
# ? Jul 10, 2012 19:10 |
|
You can buy 4gig SD cards at staples for $7. Dell's customized ESX is downloadable as well, so preinstall only saves you a few minutes, and those SD cards are ridiculously expensive. Not sure what swap space you're talking about. If you're talking about swap to host cache, you're better off with a single SSD. You can forego this if you have headroom on the hosts. It's a fairly new feature and only kicks in for a situation that you want to avoid anyways. If you're talking about virtual machine swap files, and you store them separately from the VM for some reason, you want that to be redundant and fast. However, you really probably don't want to be doing this either unless you're replicating the VMs and want to avoid replicating the swap file. Even then, you're better off putting swap on shared storage for vmotion purposes. edit: That diagram represents 10g links? Is your storage iscsi-based? Putting storage, networking, and vmotion over the same wire requires some careful planning. Be sure to do your homework and get the right network adapters for your environment. KS fucked around with this message at 20:09 on Jul 10, 2012 |
# ? Jul 10, 2012 20:03 |
|
KS posted:You can buy 4gig SD cards at staples for $7. Dell's customized ESX is downloadable as well, so preinstall only saves you a few minutes, and those SD cards are ridiculously expensive. I'm talking about Hypervisor swap. Normally I wouldn't care because if the hypervisor is doing swapping then holy gently caress hold onto your butts, but I've read that you can't join a server to an HA cluster unless it has local hypervisor swap, so I'm not sure what the loss of that swap would do to a machine.
|
# ? Jul 10, 2012 20:08 |
|
FISHMANPET posted:I've read that you can't join a server to an HA cluster unless it has local hypervisor swap I think that's really outdated -- you had to turn it on back in 2.5. Nowadays even on an SD card it sets up a 1GB swap space on the card and can join fine, but god help you if it ever gets into a swapping situation. Have never run into that error from 4.0 till now. e: vvv It'll probably never even be a problem. KS fucked around with this message at 20:24 on Jul 10, 2012 |
# ? Jul 10, 2012 20:11 |
|
KS posted:I think that's really outdated -- you had to turn it on back in 2.5. Nowadays even on an SD card it sets up a 1GB swap space on the card and can join fine, but god help you if it ever gets into a swapping situation. Have never run into that error from 4.0 till now. This is the kb, it says it still applies to 5.0. Though if it only wants 1 GB, I could easily dump that on the SAN super cheaply, and not have to worry about disks at all.
|
# ? Jul 10, 2012 20:20 |
|
Hypervisor swap is so completely uncommon since they implemented memory compression in 4.1 that seriously, gently caress yourself and your career if you let your environment get so oversubscribed that you're swapping.
|
# ? Jul 10, 2012 20:35 |
|
FISHMANPET posted:So here's a new diagram, not testing the configuration maximums: Before we make more recommendations, which license are you getting?
|
# ? Jul 10, 2012 20:59 |
|
Misogynist posted:Hypervisor swap is so completely uncommon since they implemented memory compression in 4.1 that seriously, gently caress yourself and your career if you let your environment get so oversubscribed that you're swapping. I'm well aware that it's a situation that I should strive to avoid, but ESXi requires it for HA (which we'll be using), and I can't find anything that tells me what happens when that swap space disappears in an HA cluster. Corvettefisher posted:Before we go make more recommendations which license are you getting? We'll be getting Enterprise licenses. There's a possibility that we might get some Standard licenses for a separate cluster, but that's for instruction, not production infrastructure. I've looked over Enterprise Plus and I don't think our environment will be big enough to justify the added cost for stuff like Storage DRS and VDS.
|
# ? Jul 10, 2012 21:04 |
|
Misogynist posted:Hypervisor swap is so completely uncommon since they implemented memory compression in 4.1 that seriously, gently caress yourself and your career if you let your environment get so oversubscribed that you're swapping. The guy who handed it over to me was all "But memory allocation is at 175%; it's efficient!"
|
# ? Jul 10, 2012 21:26 |
|
FISHMANPET posted:I'm well aware that it's a situation that I should strive to avoid, but ESXi requires it for HA (which we'll be using), and I can't find anything that tells what happens when that swap space disappears in an HA cluster. Don't forget profile-driven storage, Auto Deploy, SIOC/NIOC, and host profiles -- some really good features, and 5.1 is just around the corner, promising some nice new stuff. But to your previous question: if you're going with hard drives, get the smallest size, use the integrated adapter, and RAID 1 if you really want to sleep better at night. A single drive wouldn't be terrible, although it is a SPoF (so is a USB stick); you could simply vMotion things off, rip, replace, and reinstall. I would weigh the cost of disk servers against diskless, then take the difference and see if you can throw that sum at Enterprise Plus and use Auto Deploy and host profiles to get rip-and-replace hosts. VVV forgot he had a bunch of 10gb Dilbert As FUCK fucked around with this message at 01:14 on Jul 11, 2012 |
# ? Jul 10, 2012 21:28 |
|
SIOC and NIOC become really important when running a converged network. You need to be able to throttle that vMotion traffic somehow or bad things happen. If you're considering plain-jane 10g NICs with this design and using iSCSI, you're setting yourself up for some pain. You should strongly consider Enterprise Plus. The 4-host Enterprise Plus + vCenter bundle is like $25k + $6k/year support, as a ballpark figure. I will be ripping the SD card out of a diskless running host in an HA cluster tomorrow to see what happens, so stay tuned.
|
# ? Jul 10, 2012 22:25 |
|
http://professionalvmware.com/2012/07/vbrownbag-vcap5-dca-objective-1-2-1-3-herseyc/ BrownBag tonight on VCAP-DCA 5 objectives 1.2 and 1.3, if anyone is interested. @Fish: yeah, go for Enterprise Plus; if you haven't bought the licenses, shoot me an email @ corvettefish3r (at) gmail.com -- my firm sells licenses if you need to compare anything. Dilbert As FUCK fucked around with this message at 21:06 on Jul 11, 2012 |
# ? Jul 11, 2012 19:14 |
|
Mausi posted:I know I'm a little late here, but PowerCLI has the get-stat command, if you do a little googling there's some great posts by LucD on using it.
|
# ? Jul 12, 2012 15:34 |
|
Misogynist posted:Get-Stat seems reaaaaally slow. Do you know if other API approaches are any quicker? I'm about to just start querying the vCenter database, which seems like it's an order of magnitude quicker. Yeah, it's drat slow but you're already pulling from the VC tables, just indirected through PowerCLI. You also have to consider that you end up with an average of averages which is dangerous to use for inappropriate measurements. When I use get-stat, I do precisely one call which gets everything I want, and then process the rest in hash tables from there on in. If you're making more than one call to it you're asking for pain. I'm not aware of faster methods beyond pulling the data into another tool which correlates as it goes and gives you a report whenever you push the button. VCOps is great, Netuitive does ok, haven't tried much else but I hear things like Graphite can do cool poo poo if you have the time to build something specific.
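The one-bulk-call-then-hash-tables pattern Mausi describes isn't PowerCLI-specific. A hedged Python sketch of the same idea, with made-up sample tuples standing in for whatever the single Get-Stat/QueryPerf call would return:

```python
from collections import defaultdict

# Pretend this flat list is the result of ONE bulk stats call:
# (entity, metric, value) tuples. The names are fabricated samples.
samples = [
    ("vm01", "cpu.usage.average", 12.0),
    ("vm01", "cpu.usage.average", 18.0),
    ("vm01", "mem.usage.average", 40.0),
    ("vm02", "cpu.usage.average", 55.0),
    ("vm02", "cpu.usage.average", 65.0),
]

# Single pass into a hash table keyed by (entity, metric);
# no further round trips to vCenter per VM.
buckets = defaultdict(list)
for entity, metric, value in samples:
    buckets[(entity, metric)].append(value)

# Reduce however you like -- here, a plain mean per bucket.
means = {k: sum(v) / len(v) for k, v in buckets.items()}
print(means[("vm01", "cpu.usage.average")])  # 15.0
```

The point is structural: one expensive fetch, then all the grouping and averaging happens locally, which is exactly why a second call into the stats API is "asking for pain."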
|
# ? Jul 12, 2012 23:28 |
|
Mausi posted:Yeah, it's drat slow but you're already pulling from the VC tables, just indirected through PowerCLI. You also have to consider that you end up with an average of averages which is dangerous to use for inappropriate measurements. I'm probably going to stick with the Perl API. It seems to be the least-bullshit way of getting raw access to the low-level perf query mechanism. Vulture Culture fucked around with this message at 02:29 on Jul 13, 2012 |
# ? Jul 12, 2012 23:45 |
|
Did the guy with the 6 core amd vmware esxi server ever post his build/specs/cost? I am probably going to do something similar soon.
|
# ? Jul 14, 2012 06:20 |
|
Christobevii3 posted:Did the guy with the 6 core amd vmware esxi server ever post his build/specs/cost? I am probably going to do something similar soon. It's the OP. He runs a desktop and sets up all his labs through Workstation. I'll make a post tomorrow, when I'm not on my phone, of my home lab setup. It's a little mini-ITX Intel build.
|
# ? Jul 14, 2012 07:48 |
|
Christobevii3 posted:Did the guy with the 6 core amd vmware esxi server ever post his build/specs/cost? I am probably going to do something similar soon. That's basically what I have. I can do about 20 VMs before my system starts grinding slower. You'll still need an extra HDD or two, and a power supply of course, but that should give you the basic gist of it. I am looking into some whitebox builds for study for my VCAP-DCA. My teacher posted a link to this on his site: http://kendrickcoleman.com/index.php?/Tech-Blog/vmware-vsphere-home-lab-qthe-green-machinesq.html but I think it can be made cheaper. Also, SSD prices are falling rapidly.
|
# ? Jul 14, 2012 17:56 |
|
In my opinion, you are often better off building two boxes: one lower-powered one just for storage, and one for VMware. You can buy a Zacate motherboard, case, and power supply pretty drat cheap, and then you don't have to worry about IOMMU or a VMware-supported storage controller.
|
# ? Jul 14, 2012 21:35 |
|
adorai posted:In my opinion, you are often better off building two boxes. One lower powered one jsut for storage, and one for VMware. You can buy a zacate motherboard, case and power supply for pretty drat cheap, and then you don't have to worry about iommu or a VMware supported storage controller. Yeah, two physical boxes are great, as the physical network/disk limitations give you a nice gauge of performance when doing things, but Workstation + ESXi VMs is much simpler and cheaper.
|
# ? Jul 14, 2012 22:17 |
|
I passed through my LSI controller to a VM and use that as a storage server, personally. Note that this is a home lab thing. If you had to learn anything that looks like a larger setup, a decent modern desktop could have upwards of 16GB to 32GB of memory and at least 8 independent threads. You can virtualize some ESXi hosts with vCenter in a Hosted solution like Fusion, Workstation, etc. Plus I tend to have this fallback thing where I turn old gaming desktops into servers, so every desktop gets maxed out with RAM in case it has to turn into another ESXi node, before RAM prices climb due to rarity. With that said, however, I still want to consolidate everything into one larger box at some point and donate/sell the old hardware.
|
# ? Jul 15, 2012 17:38 |
|
I went ahead and pulled the trigger on the box I posted back many pages ago:

* SUPERMICRO MBD-X9SCM-F-O - $199
* Intel Xeon E3-1230 V2 - $233
* Antec NEO ECO 400C 400W - $49.99
* Mushkin Enhanced Prospector 4GB USB - $6
* LIAN LI PC-V351B - $109
* Super Talent DDR3-1333 8GB ECC x2 - $132

So all in for around ~$730. Right now I am using my QNAP as a datastore (and fileserver). In the future I will transfer the fileserver role to a VM running FreeNAS or something, using an LSI controller passed through. Next paycheck I will also get another 16GB of RAM, just because. Pretty happy with it, especially for the price and the features that I have.
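Quick sanity check on the total, using the prices as posted:

```python
# Parts list from the post above, prices in USD as listed.
parts = {
    "SUPERMICRO MBD-X9SCM-F-O": 199.00,
    "Intel Xeon E3-1230 V2": 233.00,
    "Antec NEO ECO 400C 400W": 49.99,
    "Mushkin Enhanced Prospector 4GB USB": 6.00,
    "LIAN LI PC-V351B": 109.00,
    "Super Talent DDR3-1333 8GB ECC x2": 132.00,
}
total = sum(parts.values())
print(round(total, 2))  # 728.99 -- "around ~730" checks out
```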
|
# ? Jul 15, 2012 19:12 |
|
Edit: Never mind, Powershell was being a pain in the rear end
Vulture Culture fucked around with this message at 05:07 on Jul 16, 2012 |
# ? Jul 16, 2012 04:41 |
|
If I was going to keep running in VirtualBox, is there really a difference between the AMD FX-8150 and the Intel 2600K? I need to upgrade my desktop, and I have 16GB of DDR3 and would rather not build another machine now. I have a 120GB SSD for the OS, a 480GB SSD for VMs, a 300GB Raptor, 4x1TB on a 3ware 9650, and a 400GB HD. Will this suffice for two 2008 DCs and four Windows 7 clients for labbing?
|
# ? Jul 16, 2012 09:31 |
|
Just the big SSD would suffice for that, if it's any good.
|
# ? Jul 16, 2012 09:39 |
|
SandForce 2 Mushkin. Yeah, I need to upgrade my CPU/mobo anyway since my motherboard is going out. The 400GB drive is choking at the moment.
|
# ? Jul 16, 2012 09:55 |
|
Here's what I have running at work. The parts were all bought about two years ago (not by me), and I put them all together when I started (about six months after that).

Server 1 - A bunch of different versions of IE, running on Windows 7/XP. We have two instances each of XP+IE6, XP+IE7, XP+IE8, W7+IE8, and W7+IE9. There's also a clone of our company website running Joomla on Linux, our bug tracker (Redmine), and m0n0wall. Lastly we have Windows 2003 Web Server Edition running, which basically has our development MySQL database (all our other servers are on Linux; this makes no sense).

Server 2 - A mirror of our production MySQL database, and then a 'test' database where we dump copies of the live DB. Then we have a quasi-mirror of our production webserver, basically so we can run tests with real data. These all run Linux.

The servers were originally 12GB/8GB, but the database cloning runs so much faster with 24GB, so now they are 24GB/12GB. They're in 1U cases. Loud as all hell. We have the room; I would have used desktop cases if I did it again. Also, we can only fit 2 HDs in those things.

The other thing I'm not real sure about is the LGA1366 CPUs and X58 boards. On one hand, that was the only way to get the fastest CPUs we could afford, but the boards are another $150 more than a regular board. Plus, they had 6 RAM slots instead of just 4. But there's no CPU upgrade path.

Also, I'd have gone with multiple 128GB or 256GB SSDs instead of the 1TB drives, whose performance basically dies if you get two VMs hitting the same disk for any amount of time. It's not SO bad in our use case, but we really pack the Windows VMs on there. Then again, a 256GB SSD at the time cost something like $800, and those 1TB drives were $60 (before the floods).

Originally these were ASUS motherboards, but they had Realtek NICs that didn't work with ESX. They also didn't have integrated video, so the only expansion slot I could use (1U's, remember) was taken by a graphics card.
Those Supermicro boards are great because they have dual Intel NICs.
|
# ? Jul 16, 2012 13:37 |
|
Christobevii3 posted:If I was going to keep running in virtual box is there a difference between the amd fx8150 and the intel 2600k really? I need to upgrade my desktop and have 16GB of ddr3 and would rather not build another machine now. The FX only has 4 FPUs, so half of those 8 cores aren't being fully utilized. I would go with an X6 or a 2600K. I can run that easily using under 10GB, assuming you aren't giving the DCs/clients 4GB of RAM each; run them at 85% memory usage, and if they have to swap to an SSD, oh well -- it isn't that bad for a lab environment.
|
# ? Jul 16, 2012 14:00 |
|
+ 1 per server 100PT http://www.ebay.com/itm/IBM-PRO-100...#ht_2543wt_1163 So this is going to be my VCAP-DCA lab and development/test lab. The NAS will be my desktop running VMs; I could push my luck and go with some Drobo box(es)... The HP MicroServer might do wonders, though I wonder if it is drive-locked: http://www.newegg.com/Product/Produ...#scrollFullInfo Suggestions? Ahh poo poo, I thought I hit edit. Dilbert As FUCK fucked around with this message at 23:06 on Jul 16, 2012 |
# ? Jul 16, 2012 22:52 |
|
Corvettefisher posted:So this is going to be my VCAP-DCA lab and development/test lab Those 1U's are supah dupah loud but you probably already know that.
|
# ? Jul 17, 2012 13:17 |