|
NippleFloss posted:So, like I said, your issue is that your port groups don't match. Your ESXi host management network is on a port group on your vSwitch that is sending traffic untagged. The port group that your pfSense router lives on is expecting tagged traffic. These two port groups need to have matching VLAN settings because the interfaces connected to them are on the same network and connected to the same switchport. If one is expecting to send/receive untagged traffic (the management port group) and the other is expecting to send/receive tagged traffic (the pfSense LAN interface) then one of them is going to be unhappy irrespective of the switch config. The quickest fix would be to remove the VLAN ID from the LAN port group. It's less clean than explicitly tagging all port group traffic, but it will get things working quicker. Now I understand, I think--thanks for completely spelling out the route a frame would take if I had things correct. In fact I'm posting from my desktop with the router-on-a-stick config on my network, so hooray: with that one change--taking the VLAN tag off of my LAN portgroup--it worked exactly as you said it would. I feel slightly less stupid now! If I understand you right, in my previous configuration the point of failure was that since the vSwitch LAN portgroup was tagged as VLAN 5, it was only allowing frames with tag 5 on them. But since the VLAN 5 tags were getting stripped on the way out of port 3 on my switch--per my own configuration--ESXi wasn't letting them into that port group, and the pfSense VM wasn't seeing them. So now that everything's actually working, besides wireless (I'm gambling I can put the RT-AC66U router on the Netgear switch and disable the router and switch parts, leaving only the wireless AP): you said that quick fix was less clean. 
What's the cleaner solution, then: change the Netgear switch to tag everything going into port 3 (be it VLAN 4 (WAN) or VLAN 5 (LAN)), put the ESXi management portgroup on 5 (or 4095, which seems to be ESXi's "gently caress you i'ma watch em all" wildcard, just so I can use any port on the switch if things go REALLY fubar) and the LAN portgroup on 5? Any particular reason you feel this is cleaner? Ciaphas fucked around with this message at 03:56 on Aug 18, 2016 |
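If you do go the all-tagged route, the ESXi side is a couple of esxcli commands. A hedged sketch (the port group names below are assumptions; check yours with the list command first):

```shell
# List existing port groups and their current VLAN IDs:
esxcli network vswitch standard portgroup list

# Explicitly tag each port group (names are examples, adjust to your setup).
# LAN portgroup carries VLAN 5, WAN carries VLAN 4:
esxcli network vswitch standard portgroup set --portgroup-name "LAN Network" --vlan-id 5
esxcli network vswitch standard portgroup set --portgroup-name "WAN Network" --vlan-id 4

# Management Network tagged on VLAN 5 too, or 4095 to trunk all VLANs through:
esxcli network vswitch standard portgroup set --portgroup-name "Management Network" --vlan-id 5
```

With every port group tagged, the physical switch port just trunks VLANs 4 and 5 tagged and nothing relies on a native-VLAN special case.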
# ? Aug 18, 2016 03:54 |
|
|
|
Welp, tried to squeeze in the CCNA v2 before it became unavailable and bollocksed it up. Did well on switching, dynamic routing, access list stuff and hosed up on the netflow commands and an IP addressing question. Oh well, will take a break and see what the v3 syllabus is like.
|
# ? Aug 19, 2016 12:43 |
You've got another couple of weeks to take v2 again, unless you were taking the combined exam.
|
|
# ? Aug 19, 2016 13:12 |
|
Yeah it was the combined
|
# ? Aug 19, 2016 14:00 |
|
Looking for advice and my best approach on SAN software for a nested lab. Please read my specific scenario. My home lab consists of 3 normal desktop PCs which all serve other purposes. All I did was take 3 computers that receive daily use (one downstairs, one upstairs, and the media PC) and slap a ton of memory (64GB, 64GB, and 32GB) and an extra HDD in them. Essentially, the unused cycles on those computers make up my home lab. No external SAN, just 3 computers. On memory alone I can run a pretty massive lab, and the HDDs, well, maybe I wouldn't want to encode video on 3 VMs at once, but I get by. Then I run VMware Workstation on each, with Openfiler and ESXi hosts installed on each, with those all added to a vCenter server. So we have PC1, and on that I'll have PC1SAN, PC1ESX1, PC1ESX2, and PC1ESX3, which in vCenter all get added to cluster PC1. Then I can vMotion between the different PCs or SANs. So that's the setup - I want to be clear about that because it means I don't have a tremendous amount of horsepower. No whitebox SAN here, just some Openfiler installs in Workstation. I've used iSCSI via Openfiler forever, but I'm starting to wonder if I should migrate to something else. I just bought VMUG Advantage/EvalExperience, so I'll have an install of VSAN for the home lab. Should I stick with Openfiler? Move to VSAN? FreeNAS? Something else? The only caveat is that it has to run decently on my existing setup. Yeah I don't have a ton of IOPS running it like this, but I've yet to figure out how to be more than one person doing more than one thing at a time, so the slowness has never been much of a factor. That said, I don't have a lot of IOPS to spare, so I can't install something that's going to need a legit 4 vCPUs and 8GB of memory on each box. Low footprint, good performance, what's my best bet here?
|
# ? Oct 15, 2016 15:36 |
|
I might have missed previous posts on this, but what is your motivation for your home lab? Having paid for the VMware evaluation licensing I'd be pretty keen to get some exposure to the elements that make up the VMware stack, which would drive me towards deploying VSAN even if the performance in my home lab wasn't going to match that of whatever I was currently running.
|
# ? Oct 15, 2016 15:47 |
|
Turtles all the way down. No reason to switch if it works. vSAN and Gluster will both choke on one disk without much memory. Ceph is a no-go with that use case.
|
# ? Oct 15, 2016 15:48 |
|
The specific use case, in this scenario, is definitely VMware stack, so that's a big plus to VSAN. The SANs and ESXi hosts exist solely to give me a vSphere environment to play with at home. Any VM which might need a little more horsepower I'd just build as a traditional VM in Workstation. Combining both of your replies then, it sounds like my best plan would be to stick with Openfiler on 1 or 2 of the SANs, then run VSAN on the other 1 or 2, knowing that performance is going to go down a bit but probably still livable. And there's no third option I really need to concern myself with, just run some combination of Openfiler and VSAN. e: VSAN installed with the knowledge that I may need to pick up at least a 2nd hard drive for a box that's running it. MC Fruit Stripe fucked around with this message at 17:01 on Oct 15, 2016 |
# ? Oct 15, 2016 16:48 |
|
VSAN will require at least three hosts. Each host will need a minimum of one free SSD and one free HDD. And each host needs to be running full ESXi, obviously, so you'd need to rebuild your media PC as a VM on the cluster. It's also not a requirement, but dual NICs are a good idea.
|
# ? Oct 15, 2016 18:38 |
|
I should be able to build it in a nested environment though, no? 3 ESXi VMs on the same PC, each with an additional, empty drive for VSAN to use. Slow or not, without having even Googled it yet, someone has installed VSAN inside a workstation environment, surely.
|
# ? Oct 15, 2016 21:06 |
|
MC Fruit Stripe posted:I should be able to build it in a nested environment though, no? 3 ESXi VMs on the same PC, each with an additional, empty drive for VSAN to use. Slow or not, without having even Googled it yet, someone has installed VSAN inside a workstation environment, surely. Sure, you can nest it, but it's going to run like poo poo, so you'll never run VMs on it, and all you'll really be testing is the setup, which you could do just as well with VMware Hands-on Labs. VSAN setup is like a 10 minute task. It's really not worth going through the trouble of doing it at home if you aren't actually going to use it.
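For anyone who wants to try the nested route anyway: the usual trick is faking an SSD on the nested ESXi VM's empty virtual disk so vSAN will claim it as a cache device. A rough sketch, run inside each nested ESXi host (the device identifier below is an example, not a real one; list yours first):

```shell
# Find the identifier of the empty virtual disk:
esxcli storage core device list

# Tag it as SSD via a SATP claim rule so vSAN accepts it as cache tier
# (replace the device name with your own):
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL \
    --device mpx.vmhba1:C0:T1:L0 --option "enable_ssd"
esxcli storage core claiming reclaim --device mpx.vmhba1:C0:T1:L0
```

After that the disk shows up as flash in the vSAN disk-claiming wizard, performance caveats and all.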
|
# ? Oct 15, 2016 21:17 |
|
Wow. Had another node's internal USB key die. Same story as before, Nutanix was just twiddling its thumbs, apparently not monitoring the actual boot device at all. I only really noticed during the migration to ESX that I suddenly couldn't boot VMs. Came with a super helpful error message as well. To its credit, once I identified the server and powered it down the cluster rebuilt and I could boot things again, but this doesn't feel well-baked at all. Maybe if they gave community edition members the ability to use other hypervisors and access to the knowledge base it'd be better. But seriously, how do constant read errors on a system disk not trigger a node health warning? It seems like their monitoring only makes sure the CVM is healthy and not the actual hypervisor underneath. So, anyone have a preferred USB boot drive? Going SATADOM feels like overkill, but since it preserves my hotswap disk capacity I might just pull the trigger. H2SO4 fucked around with this message at 21:58 on Oct 15, 2016 |
# ? Oct 15, 2016 21:52 |
|
If it's just to boot from, SanDisk have a range of Industrial SD cards that are designed to be written to a ton.
|
# ? Oct 16, 2016 13:34 |
|
Thanks Ants posted:If it's just to boot from, SanDisk have a range of Industrial SD cards that are designed to be written to a ton. I just bit the bullet and got some cheap MLC SSDs to boot from. It'll be a while until I really need more datastore capacity. When I hit that limit I think I'll go the industrial SD card route.
|
# ? Oct 16, 2016 16:44 |
|
big money big clit posted:Sure, you can nest it, but it's going to run like poo poo so you'll never run VMs on it so all you'll really be testing is setting it up, which you could just do just as well with VMware hands on labs. VSAN setup is like a 10 minute task. It's really not worth going through the trouble of doing it at home if you aren't actually going to use it.
|
# ? Oct 16, 2016 21:21 |
|
Finally got the four nodes converted to ESX and vSAN. In case anyone's wondering, vSAN and HA do a great job recovering from when someone fatfingers a vmkernel address and inadvertently duplicates an existing address of another host. Not that anyone's stupid enough to do that.
|
# ? Oct 25, 2016 02:06 |
|
I may have asked this before: is there really a lot to learn for the MCSE or is it just that they make the test a bunch of bullshit to gently caress with you? What's a reasonable amount of time to allot for studying for the MCSA and MCSE Server Infrastructure? What are the best resources?
|
# ? Nov 14, 2016 08:33 |
|
Thanks Ants posted:Welp, tried to squeeze in the CCNA v2 before it became unavailable and bollocksed it up. Did well on switching, dynamic routing, access list stuff and hosed up on the netflow commands and an IP addressing question. If you're getting your CCNA for the first time and not recertifying, go for the two-exam route. It's a lot easier, as it splits the syllabus so you can focus on one half of it at a time.
|
# ? Nov 16, 2016 12:26 |
|
OK so this is actually for a thing at work and not a home lab but I'd figure I ask here anyway. Disclaimer: I'm a far cry from a network engineer so none of the following might make any sense. We have a setup kind of like this nice picture I just drew in ms paint: The two "bootp client" machines are supposed to load an image when booting from the VM that runs on the "jump host" machine. The VM is running in VirtualBox, and I have defined two interfaces: eth0, which is behind a NAT with a dynamic IP (to give the VM internet access), and eth1 with a static IP, which is the dhcp/bootp server that the clients are supposed to talk to. The bootp client machines are otherwise supposed to be behind a VLAN. The jump host is a laptop with a single physical NIC. According to the documentation the PXE boot should be on the native VLAN, so my first attempts at getting this to work were just to bridge eth0 on the host with eth1 on the VM. This doesn't seem to work at all, and when I run tcpdump on the interface I can see DHCP requests but no response is sent. If I instead try to bridge eth1 with a VLAN interface on the host, things get a bit better, but I noticed that no DHCP requests are received during the PXE boot; once the client has actually booted into Linux (preinstalled, not via PXE boot), it manages to get an IP from the DHCP server on the VM. But I'm guessing that with the VLAN interface, untagged packets are dropped, so this will never work properly. Anyway, is this possible to do when the laptop only has one NIC? I'm obviously doing something wrong but I've run out of ideas since I don't really know a whole lot about networking.
|
# ? Nov 16, 2016 19:14 |
|
VLANs shouldn't be visible to the end devices, assuming they have virtual NICs and the hypervisor handles removing the VLAN tag. Have you tried putting the adapter in VirtualBox into promiscuous mode?
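For reference, that can be set from the command line too. A sketch assuming the VM is named "jumphost" and the bridged interface is the VM's NIC 2 (both names are assumptions):

```shell
# Let the bridged adapter receive frames addressed to other MACs --
# needed when the VM answers DHCP/bootp requests from external clients:
VBoxManage modifyvm "jumphost" --nicpromisc2 allow-all
```

The same setting lives in the GUI under Settings > Network > Advanced > Promiscuous Mode.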
|
# ? Nov 16, 2016 19:29 |
|
Thanks Ants posted:VLANs shouldn't be visible to the end devices assuming they have virtual NICs on and removing the VLAN tag is done by the hypervisor. Yeah I did, but it made no difference from what I could tell. I also tried adding the relevant ports to the iptables file of the host, but I don't know if that makes any difference on a bridged network. What's weird to me is that the host picks up DHCP requests during PXE boot, but not the VM, which only starts to get the requests once the client has booted into Linux. They're on the same subnet so a broadcast should be picked up by everyone at all times?
|
# ? Nov 16, 2016 20:13 |
|
Are you actually using VLANs?
|
# ? Nov 16, 2016 20:48 |
|
So, I have a dumb problem I've been banging my head against all day. I have an Optiplex 960 sitting idle here at work. I want to ESXi it up and start labbing on it. I grab a spare USB stick. Install to that USB stick goes well. First boot from that USB stick goes well. From the second boot on, though, it tells me bank5 and bank6 are hosed up and there's no hypervisor. It seems like there's some caching of configs it's trying to do after its first boot that's loving up and pointing it at the wrong boot information. I assume this is because I'm running from a USB stick and it's getting confused, but I'm not sure. Anybody else experienced this before? Know a fix? I really don't want to have to dig around and put in another hard drive. edit-v2: Turns out Windows 10 may have been attempting to automount and gently caress around with the boot banks. ErIog fucked around with this message at 04:33 on Nov 18, 2016 |
# ? Nov 17, 2016 08:31 |
|
evol262 posted:Are you actually using VLANs? I have defined a vlan interface on the linux host that the guest bridges with. The other machines are not on the vlan despite getting an IP so they will send untagged packets (as I understand) which is why I don't think it will work. I'm not sure if I can configure them to be on a vlan at boot time which I -think- I would need to do if I wanted PXE boot to work this way... Edit: well I managed to get everything to work, I just had to set the switch port as untagged on the vlan where the pxe boot is happening. netcat fucked around with this message at 15:46 on Nov 17, 2016 |
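For anyone hitting the same wall, the single-NIC bridge setup being described looks roughly like this on the Linux host (interface names are assumptions). The key point from the fix above: PXE requests arrive untagged, so the bridge member has to be the physical NIC itself, not a VLAN subinterface:

```shell
# Working variant: bridge the physical NIC (native/untagged VLAN) to the VM's
# bridged interface, so untagged PXE/DHCP broadcasts reach the guest:
ip link add br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# The earlier, non-working variant bridged a tagged subinterface instead,
# which never sees the untagged PXE broadcasts:
ip link add link eth0 name eth0.5 type vlan id 5
```

Equivalently, marking the switch port untagged on the PXE VLAN (as done above) moves that traffic onto the native VLAN so the plain-NIC bridge works.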
# ? Nov 17, 2016 09:07 |
Any suggestion for a NUC or NUC-sized device with two NICs that I can put ESXi on? I want to spin up a Palo Alto VM and use it in my home network, but I'm not going to go the full 1U server or anything. A NUC would be great, but I know things like NICs can be touchy with ESXi and wondered if anyone here had set up something similar.
|
|
# ? Nov 30, 2016 00:56 |
|
rafikki posted:Any suggestion for a NUC or NUC sized device with two NICs that I can put ESXi on? I want to spin up a Palo Alto VM and use it in my home network, but I'm not going to go the full 1U server or anything. A NUC would be great, but I know things like NICs can be touchy with ESXi and wondered if anyone here had set up something similar. I have read of people using a USB 3.0 gigabit NIC with ESXi for a while, never done it myself (I have loaded drivers this way though). http://www.virtuallyghetto.com/2016/11/usb-3-0-ethernet-adapter-nic-driver-for-esxi-6-5.html
|
# ? Nov 30, 2016 02:56 |
|
rafikki posted:Any suggestion for a NUC or NUC sized device with two NICs that I can put ESXi on? I want to spin up a Palo Alto VM and use it in my home network, but I'm not going to go the full 1U server or anything. A NUC would be great, but I know things like NICs can be touchy with ESXi and wondered if anyone here had set up something similar. Do you need dual NICs for any particular reason? You can use a single NIC with ESXi; it only really limits you during certain migration scenarios.
|
# ? Nov 30, 2016 03:01 |
big money big clit posted:Do you need dual NICs for any particular reason? You can use a single NIC with ESXi, it only really limits you during certain migration scenarios. WAN and LAN ports. Moey posted:I have read of people using a USB 3.0 gigabit NIC with ESXi for a while, never done it myself though (I have loaded drivers this way though). Interesting, didn't think of a USB NIC. I'd still like to hear if anyone has any other suggestion itt. rafikki fucked around with this message at 03:29 on Nov 30, 2016 |
|
# ? Nov 30, 2016 03:27 |
|
I use a generic USB 3 Ethernet adapter with my NUC/ESXi setup and it's worked fine, but I haven't tried pushing all of my network traffic through it. YMMV, of course. http://www.devtty.uk/homelab/USB-Ethernet-driver-for-ESXi-6.5/, http://www.virten.net/2016/06/additional-usb-nic-for-intel-nucs/ long-ass nips Diane fucked around with this message at 03:36 on Nov 30, 2016 |
# ? Nov 30, 2016 03:30 |
|
rafikki posted:WAN and LAN ports. Get a VLAN-capable switch and do router-on-a-stick. I've looked before and there aren't really any offerings like the NUC with two network interfaces. Closest thing I've seen are some barebones SFF PCs from Shuttle and the like.
|
# ? Nov 30, 2016 03:34 |
big money big clit posted:Get a VLAN capable switch and do a router on a stick. I've looked before and there aren't really any offerings like the NUC with two network interfaces. Closest thing I've seen are some bare bones SFF pcs from Shuttle and the like. I thought about it, but I was hoping to just do the firewall for now and worry about a managed switch later. There are definitely dual-NIC NUCs out there, like http://a.co/45JBbPv or some Logic Supply ones, just curious if anyone has set up something themselves. If nothing else, I know I could just get a micro-ATX case and build it out with a second NIC in there.
|
|
# ? Nov 30, 2016 03:57 |
|
rafikki posted:I thought about it, but I was hoping to just do the firewall for now and worry about a managed switch later. That and the logic supply option both have very low limits on memory which makes them fairly useless as virtualization hosts. Best option is going to be to build one out yourself, but it'll end up being more expensive, bigger, and probably louder.
|
# ? Nov 30, 2016 09:17 |
|
Does it have to be a virtual machine? There are plenty of tiny x86 boxes out there for this sort of thing https://pcengines.ch/apu2.htm is the first that comes to mind.
|
# ? Nov 30, 2016 17:19 |
|
rafikki posted:WAN and LAN ports. if you want a real nic you could go TB3-TB2 adapter > apple TB2 NIC
|
# ? Dec 1, 2016 21:40 |
|
I'm looking at setting up a home lab to help me study for the MCSA, and I'm hoping to get some recommendations. It is going to be in the corner of my living room so I'd like it to be small and quiet, and not being a power hog is a plus, too. I'd like to base everything off of ESXi, since I'd like to get some experience using that as well. I'm planning on running Windows Server 2012 R2 and two or three other Windows 10 VMs. My budget is around $600. Would one of these systems fit the bill? https://www.newegg.com/Product/Product.aspx?Item=9SIA6ZP4FH7280 https://www.newegg.com/Product/Product.aspx?Item=2NS-000M-002U0 https://www.newegg.com/Product/Product.aspx?Item=9SIA6YC4R32709 The Dell in the third link is the most attractive to me, since it has the more powerful processor and a decent chunk of RAM already. Any words of wisdom or advice?
|
# ? Jan 23, 2017 22:54 |
|
You're going to have a bottleneck at either storage I/O, memory, or CPU. I don't think CPU will be an issue unless you try to do something crazy, so I'd focus on having adequate memory and maybe an SSD.
|
# ? Jan 23, 2017 23:47 |
|
I rock the TS140 w/ 32GB of ECC RAM. The only thing you'll have to do is pick up an approved network adapter or roll some hacked drivers into the ISO. I boot from a USB stick so my 4x SSDs are all for creating VMs. Recent ESXi updates support nesting (as in it works, but isn't officially "supported"), and they've rolled more features out, so simulating Hyper-V as if it's bare metal can be done. Should be really easy to stack a ton of VMs for nested domain homelabbing as long as you're ditching the GUI.
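The nesting bit comes down to a couple of .vmx flags on the guest that will run the nested hypervisor. A sketch (datastore path and VM name are made up; edit with the VM powered off):

```shell
# Expose hardware virtualization (VT-x/EPT) to the guest so a nested
# hypervisor like ESXi or Hyper-V will actually boot:
cat >> /vmfs/volumes/datastore1/nested-hv/nested-hv.vmx <<'EOF'
vhv.enable = "TRUE"
EOF
# Some guests (notably Hyper-V) also want the outer hypervisor hidden, via:
#   hypervisor.cpuid.v0 = "FALSE"
```

Newer vSphere web clients expose the same thing as a "Hardware virtualization" checkbox under CPU settings, so the file edit is only needed on older setups.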
incoherent fucked around with this message at 03:13 on Jan 24, 2017 |
# ? Jan 24, 2017 03:07 |
|
Thanks for the replies. I've been looking as bit more, and this caught my eye: https://www.newegg.com/Product/Product.aspx?Item=9SIAC0F4XA9876 Dual processors, 16 threads, looks like I could add a bunch of memory down the road. The only things I would be concerned about are noise and power draw. Anybody have any experience with these?
|
# ? Jan 24, 2017 17:18 |
|
Have you looked into the Intel NUC line? They're designed for low noise and power draw. They can go up to 32GB, but are only dual-core and don't have a lot of drive bays. If you're only looking to run a handful of VMs it could be a good fit.
|
# ? Jan 24, 2017 19:00 |
|
|
|
I have a NUC that I run 6-ish VMs on at a time but it can be hard to keep it in the $600 price range once it's all kitted out. Really nice box, though, I'm pretty happy with it.
|
# ? Jan 24, 2017 19:03 |