|
long-rear end nips Diane posted:I have a NUC that I run 6-ish VMs on at a time but it can be hard to keep it in the $600 price range once it's all kitted out. Same; it's my "off-cluster" lab box. That said, be sure it's a model with drivers that match your OS. I know the 5i5MYHE has Server 2012 R2 drivers; some models do not.
|
# ? Jan 24, 2017 19:13 |
|
|
# ? Jun 5, 2024 08:05 |
|
I've used a couple NUCs in the past, I didn't think they would be powerful enough to run multiple VMs. I'll take a look at those, thanks!
|
# ? Jan 24, 2017 19:27 |
|
bigger thicker loads posted:I've used a couple NUCs in the past, I didn't think they would be powerful enough to run multiple VMs. I'll take a look at those, thanks! An SSD makes the biggest difference, followed closely by maxing out RAM. CPU is rarely a limiting factor on VMs these days (depending on your workload of course). That said, beware that unless you go USB, you're stuck with a single NIC.
|
# ? Jan 24, 2017 19:36 |
|
The new 6th gen NUCs have the benefit of being supported out of the box by vanilla ESXi without having to build your own custom image to get NICs working. You can do 32GB of memory, an M.2 SSD, and a 2.5" drive, so you can get a decent amount of storage and memory in there, and they are dead quiet and put out very minimal heat.
|
# ? Jan 24, 2017 20:02 |
|
big money big clit posted:The new 6th gen NUCs have the benefit of being supported out of the box by vanilla ESXi without having to build your own custom image to get nics working. Now I want to upgrade my perfectly fine 5th gen NUC for literally no real reason.
|
# ? Jan 24, 2017 22:34 |
|
bigger thicker loads posted:Thanks for the replies. I've been looking a bit more, and this caught my eye: I use the z400 line as FreeNAS servers. They're essentially silent and I haven't noticed abnormal power draw, but I also don't have a Kill-A-Watt to check.
|
# ? Jan 24, 2017 22:35 |
|
Walked posted:That said, beware that unless you go USB, you're stuck with a single NIC I would say more like, "be aware," and I wouldn't call that a huge deal. A second NIC seems like it would be handy for some use cases for a home lab, but it's not like you can't work around it with software networking. For instance, having direct access to the internal VM network from a client can easily be accomplished by just tunneling in. pfSense makes all this stuff pretty easy even if you don't know a lot about networking. ErIog fucked around with this message at 01:42 on Jan 25, 2017 |
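As a concrete sketch of the "just tunnel in" approach (all hostnames and addresses here are made up for illustration), a single SSH port-forward through whatever box has a leg on both networks gets a client onto the internal VM network:

```shell
# Forward a local port to a web UI on a VM that lives on the
# internal-only network, via the pfSense box (or any dual-homed host):
ssh -L 8443:10.0.50.10:443 admin@pfsense.home.lan

# Or open a SOCKS proxy and point the browser at localhost:1080
# to reach anything on the internal network:
ssh -D 1080 admin@pfsense.home.lan
```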
# ? Jan 25, 2017 01:28 |
|
ErIog posted:I would say more like, "be aware," and I wouldn't call that a huge deal. A second NIC seems like it would be handy for some use cases, but it's not like you can't work around it with software networking. Oh I agree; I wasn't trying to get at the negative connotation of "beware" but rather just making sure they're aware. It's never been an issue; sometimes I wish I had a second NIC for SAN traffic, but that's a pretty edge-case scenario I think.
|
# ? Jan 25, 2017 01:30 |
|
The big appeal of a second NIC for VMware stuff is when you're trying to get onto or off of a dvSwitch; it can be nice to have. You can always work around it with nested virt, though.
|
# ? Jan 25, 2017 04:42 |
|
Thanks for the advice about using a NUC as an ESXi host. With my tax return coming soon, I've got a little more breathing room in my budget, so I'm deciding between these two systems: i5 Skylake, 4 threads, up to 2.9GHz https://www.amazon.com/Intel-NUC-Kit-NUC6i5SYH-Mini/dp/B018Q0GN60/ or i7 Skull Canyon, 8 threads, up to 3.5GHz https://www.amazon.com/Intel-NUC-Kit-NUC6i7KYK-Mini/dp/B01DJ9XS52/ For a $200 difference, is it worth it for double the threads, higher max speed, and an upgrade from a more mobile-type CPU to one closer to a desktop? The i7 box uses more power, but it's still only 45 watts. It looks like the i7 box needs to have the Thunderbolt chip disabled in BIOS during the initial installation of ESXi, but it can be re-enabled afterwards.
|
# ? Feb 5, 2017 20:55 |
|
bigger thicker loads posted:Thanks for the advice about using an NUC as an ESXi host. With my tax return coming soon, I've got a little more breathing room in my budget, so I'm deciding between these two systems: It's going to depend on how many VMs you want to run concurrently and how bogged down they'll get if the machine is overprovisioned, but I would probably not buy a dual-core CPU for a VM box; I'd lean towards the latter. In my case I run 12+ VMs on a dual 6-core Xeon machine that cost about $600 to put together (with some SSD storage). It's 2011 hardware and uses around 200 watts when the CPUs are under heavy load, but it fits my needs.
|
# ? Feb 5, 2017 21:38 |
|
You're very likely going to run out of memory before CPU on a NUC in a virtual lab scenario. Unless you're doing fairly compute-intensive tasks, CPU is not likely to be the bottleneck, and I'd save the money and do the i5.
|
# ? Feb 7, 2017 09:23 |
|
I'm just about to upgrade the RAM in my little Synology NAS (which I now use for a ton of self-hosted services via Docker, in addition to file sharing and VPN). On the plus side, it has a removable DDR3L SODIMM memory module. On the minus side, it's buried deep inside: https://forum.synology.com/enu/viewtopic.php?t=91905#p354295 What the hell Synology? Would easy access to the RAM have killed the design engineers?
|
# ? Feb 7, 2017 11:26 |
|
Your DS415+ is probably going to stop working shortly anyway: https://www.theregister.co.uk/2017/02/06/cisco_intel_decline_to_link_product_warning_to_faulty_chip/ Make sure you have backups / a warranty / escape plan.
|
# ? Feb 7, 2017 11:49 |
|
Thanks Ants posted:Your DS415+ is probably going to stop working shortly anyway: Crap.
|
# ? Feb 7, 2017 12:03 |
|
Thanks Ants posted:Your DS415+ is probably going to stop working shortly anyway: Well, at least it's an excuse to divest ourselves of some of these crappy Fisher-Price NASes.
|
# ? Feb 7, 2017 14:07 |
|
There are some pretty serious performance issues with the native AHCI driver in ESXi 6.5, so if you're using SATA drives in your home lab you may want to disable that driver and revert to the legacy one if you get poor performance. This affects anything that uses the SATA bus, so M.2 SSDs as well. esxcli system module set --enabled=false --module=vmw_ahci
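For reference, the full dance in the ESXi shell looks roughly like this (a reboot is required for the module change to take effect):

```shell
# Check the current state of the native AHCI driver
esxcli system module list | grep ahci

# Disable it; ESXi falls back to the legacy sata-ahci driver on reboot
esxcli system module set --enabled=false --module=vmw_ahci

# To undo it later:
# esxcli system module set --enabled=true --module=vmw_ahci
```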
|
# ? Feb 8, 2017 03:08 |
|
Well crap. The two RS2416RP+ units I got at work are going down.
|
# ? Feb 8, 2017 03:11 |
|
big money big clit posted:There are some pretty serious performance issues with the native AHCI driver in ESXi 6.5, so if you're using SATA drives in your home lab you may want to disable that driver to revert to the legacy one if you get poor performance. This affects anything that uses the SATA bus, so M.2 SSDs as well. That's interesting! I have a work setup where I'm testing hardware on VMs and the datastore is on a SATA SSD. It seemed to lose connection a lot (and ran godawful slow) until I moved to an NVMe drive. Do you know if losing connection to the datastore is a symptom of this issue?
|
# ? Feb 8, 2017 05:19 |
|
Is there a particular reason that this thread and the other one (VM thread) are so focused on VMware/ESXi as opposed to KVM? Total newbie asking.
|
# ? Feb 16, 2017 22:49 |
|
Labs are for breaking/testing things you might want to learn without wrecking poo poo at work or where it matters. KVM has less hold on the workplace in general. But if you have questions, I run KVM everywhere, so feel free to ask.
|
# ? Feb 16, 2017 23:06 |
|
Ok well I've never set up any kind of hypervisor before and my only VM experience is Fusion running on a Mac for work-specific apps. However I am intrigued by the possibility of setting up a server that could run a few macOS and Linux VMs to tinker with and ideally access them via cheap hardware such as a Chromebook. I am particularly interested in getting away from being tied to specific hardware ecosystems like Apple but would still like to be able to use macOS and Mac software. Would there be particular advantages/disadvantages to ESXi vs KVM vs other for very entry-level learning/tinkering purposes?
|
# ? Feb 16, 2017 23:30 |
|
VMware (in general) has a lot more resources on Google if you wanna ask a quick question, and the user interface is somewhat more friendly for doing odd stuff (which you'll need to do to virtualize OS X). The hardware support can be finicky compared to KVM, though, which will pretty much run on any crapbox that has hardware virtualization support.
|
# ? Feb 17, 2017 00:08 |
|
This may be relevant for some of y'all (I know it's been pissing me off for ages now): after almost 2 years of bullshit back-and-forth between Intel and Microsoft, a version of Advanced Network Services (ANS) has finally been released for Windows 10 which supports VLANs and teaming (functionality that was inexplicably removed when Windows 8.1 was released). You can grab the new version here; I'm using it myself with an I350-T4 and it works perfectly: https://downloadcenter.intel.com/download/25016/Intel-Network-Adapter-Driver-for-Windows-10
|
# ? Mar 1, 2017 18:44 |
|
Smashing Link posted:Is there a particular reason that this thread and the other one (VM thread) are so focused on VMware/ESXi as opposed to KVM? Total newbie asking. ESXi is incredibly easy to get up and running and has a pretty okay user interface for interacting with stuff. KVM can have that as well, but the experience I've had with it in a production setting has been that you can get stuck on weird small issues. I use KVM in production and ESXi for prototyping. It's trivial to create/destroy/snapshot in the UI. It's easy to back up, and trivial to back up the config of the system itself. It runs from an SD card or USB stick so dual booting it is very easy. If I got used to KVM then I bet a lot of these things would be true there too, but ESXi just removes a lot of the hassle.
|
# ? Mar 11, 2017 12:49 |
|
KVM (through libvirt) can trivially create/destroy/clone and export/import configuration through virsh, virt-manager, kimchi, or whatever. It does not do things like HA. At all. Because KVM is essentially a driver, and libvirt sits on top to say "here's how you access storage/etc". To make it do things like HA, you can either set up obnoxious resources in pacemaker, or use an actual product backed by KVM (oVirt, Proxmox, etc). KVM is comparable to vmkernel, not vSphere.
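To illustrate the virsh side (the guest name and ISO path are made-up examples), the basic lifecycle looks like:

```shell
# Define and start a guest, snapshot it, export its config, tear it down.
virt-install --name labvm --memory 2048 --vcpus 2 \
    --disk size=20 --cdrom /isos/debian.iso --os-variant generic

virsh snapshot-create-as labvm clean-install   # take a snapshot
virsh dumpxml labvm > labvm.xml                # export the XML config
virsh destroy labvm                            # hard power-off
virsh undefine labvm --remove-all-storage      # delete it entirely
```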
|
# ? Mar 11, 2017 16:29 |
|
That's true, and why I was trying to caveat that my only experience with KVM has been in production (where we only use virsh for management). You are correct. I should have specified that I find virsh clunky rather than KVM itself.
|
# ? Mar 12, 2017 01:21 |
|
Do you guys run your labs on your home network or have them segregated? If you have them on the home network, how do you handle DHCP and DNS? I want to do away with my lab network and have everything on the home network. I want devices on the home network to continue as they are, pulling DHCP from the router, resolving DNS in the usual way, etc. In the lab, though, I'd like to continue having that DNS and DHCP. I'm not seeing my angle here.
|
# ? Apr 21, 2017 20:53 |
|
MC Fruit Stripe posted:Do you guys run your labs on your home network or have them segregated? If you have them on the home network, how do you handle DHCP and DNS? Well, with DNS you will have to configure the net adapters to point to the DNS servers you want. For DHCP I was going to say group policy (in fact I did, before I edited) but derp, they need an address first.
|
# ? Apr 21, 2017 21:36 |
|
Yeah DNS is easy if I set it up manually, but if addresses are handed out via DHCP then we have a problem, because I want some devices receiving DNS1 and some receiving DNS2. Working through some issues on the home network which are being caused by having two networks. I think I've got it narrowed down to two solutions, I can either 1) completely segregate the two networks by not configuring a default gateway on the lab NICs, or 2) run my home and lab off the same network while maintaining separate DHCP and DNS by ____. It's what goes in ____ that has me thrown. I mean it may not even be possible with the equipment that I have (no VLAN capability) but I'm at least trying to look at options.
|
# ? Apr 21, 2017 22:19 |
|
Is there a way to run a DHCP server that only answers to specific MAC addresses? So your lab stuff all gets an address from that, everything else gets ignored by your lab DHCP and picks it up from your home router. Edit: This only works if the lab DHCP can answer quicker than the home router, or there's no guarantee of keeping the lab stuff in the lab. Thanks Ants fucked around with this message at 22:31 on Apr 21, 2017 |
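There is: dnsmasq can do exactly this. List the lab MACs explicitly and tell it to ignore everyone else, so unknown clients fall through to the home router's DHCP (all MACs and addresses below are made-up examples; the race-condition caveat above still applies):

```
# /etc/dnsmasq.conf on the lab DHCP server
dhcp-range=192.168.1.200,192.168.1.250,12h
dhcp-host=52:54:00:aa:bb:cc,192.168.1.201    # lab VM 1
dhcp-host=52:54:00:dd:ee:ff,192.168.1.202    # lab VM 2
dhcp-ignore=tag:!known                       # ignore MACs not listed above
# Hand the lab clients a lab-specific DNS server:
dhcp-option=option:dns-server,192.168.1.53
```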
# ? Apr 21, 2017 22:29 |
|
How fancy of a DHCP server do you have? Can you specify the settings returned to specific MAC addresses other than IP?
|
# ? Apr 21, 2017 22:31 |
|
I run it all on one network. I just have all devices pointed at the same DNS and DHCP, but not everything is domain joined. I'm sure there's a valid reason not to do what I did, but I found it much easier to handle than trying to separate them.
|
# ? Apr 21, 2017 22:38 |
|
MC Fruit Stripe posted:Yeah DNS is easy if I set it up manually, but if addresses are handed out via DHCP then we have a problem, because I want some devices receiving DNS1 and some receiving DNS2. I think you'd benefit greatly by getting some better networking hardware. With Cisco the DHCP Relay Agent can be configured per-interface, which allows you to use specific DHCP servers per network segment: https://www.cisco.com/en/US/docs/ios/12_4t/ip_addr/configuration/guide/htdhcpre.html#wp1085232. IMO the Cisco Catalyst CX series switches are great for home labs as they're small, quiet (passive cooling, no fans) and run full IOS (Layer 2 LAN Base on the 2960-CX and Layer 2+3 IP Base on the 3560-CX). However they are a bit pricey (around $600 for a 3560-CX) and unless you're already familiar with IOS the learning curve might be a bit steep. That aside, I've got a 2960-CX in my home lab and I'm extremely happy with it. Or alternatively have a look at the EdgeRouter series from Ubiquiti. They're solid devices with full Layer 2+3 support and are extremely cheap (the EdgeRouter X is around $60).
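As a sketch of the IOS side (interface names and addresses invented for illustration), per-segment relay is just an ip helper-address on the lab SVI, while the home VLAN keeps using the router's DHCP:

```
! Lab VLAN relays DHCP broadcasts to a dedicated lab DHCP server
interface Vlan20
 description LAB
 ip address 192.168.20.1 255.255.255.0
 ip helper-address 192.168.20.53
!
! Home VLAN: no helper-address, so the home router keeps serving it
```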
|
# ? Apr 22, 2017 06:32 |
|
I love your advice, let me read up on the links provided and see what looks appropriate for my setup. Thanks all!
|
# ? Apr 22, 2017 13:01 |
|
Cross-post, asking for a friend, anyone run into the same issue? cheese-cube posted:Anyone here managed to get ASAv running on ESXi 6.5 in a Workstation VM? A colleague of mine is having issues, "Failed to deploy VM: postNFCData failed." error. Edit: nevermind, psydude has answered my question: psydude posted:Pretty much all Cisco products aren't officially supported on 6.5 yet, and I've heard of all sorts of issues with it more generally. Pile Of Garbage fucked around with this message at 06:23 on May 4, 2017 |
# ? May 2, 2017 17:19 |
|
Crossposting from the cert thread: I bought a lab (http://www.ebay.com/itm/250930267864) with 3 routers/2 switches for studying for my ICND1/2. Did I get a good one? I've been using pearson/cisco press/PT but from what I read having actual hardware should make the studying process easier. It has open slots so if anyone has any recommendation on an extra switch or better router or whatever, please enlighten me.
|
# ? Jun 19, 2017 17:04 |
|
Contrary to the (now ancient) OPs, you can buy used Dell servers stupid cheap on eBay these days. If I have a rack sitting in the back room, is there any reason I should spend money on a c6100 or r710 just to gently caress around with? Seems like you can get a decent amount of cores plus a sizable amount of memory for <$300
|
# ? Jun 22, 2017 04:23 |
|
Yeah, eBay servers are the reason I have like 7 servers. It's a real problem, I swear!
|
# ? Jun 22, 2017 04:30 |
|
|
Nativity In Black posted:Contrary to the (now ancient) OPs you can buy used Dell servers stupid cheap on ebay these days. If I have a rack sitting in the back room, is there any reason I should spend money on a c6100 or r710 just to gently caress around with? Seems like you can get a decent amount of cores plus a sizable amount of memory for <$300 eBay servers were always cheap. They're also loud power hogs. Have you heard a C6100? If you can live with that, great, get one.
|
# ? Jun 22, 2017 12:50 |