MC Fruit Stripe
Nov 26, 2002

around and around we go
Was still running Workstation 8 so I decided to upgrade to 10, and nothing about my lab is too precious, so I scrapped the whole thing. Why is it that every time I rebuild my home lab, I spend all of my time getting pfSense to play nice with routing? You'd think I'd remember the step I'm overlooking by now, jiminy freakin' hopscotch.

e: Not asking for help, it's going to be something silly that I overlooked, and I got myself into this mess, just venting a bit. I never learn.

e: Time to spin up a DC, admin PC, and 5 ESXi hosts: 40 minutes. Time to get pfSense to give me access to the internet: oh, 'bout three hours now. To be fair I keep distracting myself with the weirder parts of YouTube, but still, poo poo.

MC Fruit Stripe fucked around with this message at 08:16 on Oct 20, 2013

MC Fruit Stripe
Nov 26, 2002

around and around we go

thebigcow posted:

i've been there. the problem is the pfsense devs assume it is your internet connection instead of just a router and things get stupid.

I stayed up far later than is reasonable and STILL didn't figure it out. Grrrr. :(

But yeah I think you're right - every piece of information I found involved entering your ISP PPPoE credentials or something like that. Nooo, that is not the solution.

MC Fruit Stripe fucked around with this message at 18:07 on Oct 20, 2013

MC Fruit Stripe
Nov 26, 2002

around and around we go
.

MC Fruit Stripe fucked around with this message at 00:21 on Feb 11, 2014

MC Fruit Stripe
Nov 26, 2002

around and around we go
Just a bit of a throwaway question, but can anyone tell me where Openfiler is placing data before it flushes to disk? I created a volume and presented it to an ESXi host, copied over 60GB of software, but could not actually find that data on my hard drive. Memory on my local system, the ESXi host, Openfiler, hell, even the VM admin box I use to run vSphere - none of them showed any memory pressure. No increased file sizes, no swapping, nothing that I could find, but obviously that information was somewhere. It finally showed up in earnest when I shut down both ESXi and Openfiler, but until then the files were there, accessible, everything, just not actually registering on my hard drive. Where were they, any guesses?

A thoroughly unimportant question.
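
e: Best guess so far, and it is only a guess: the Linux page cache on the Openfiler VM was holding the writes. On a Linux-based appliance you can eyeball how much data is cached and still waiting for writeback with something like this bit of Python:

    # Rough check of page cache and pending writeback on a Linux box
    with open("/proc/meminfo") as f:
        info = dict(line.split(":") for line in f)
    for key in ("Cached", "Dirty", "Writeback"):
        print(key, info[key].strip())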

MC Fruit Stripe fucked around with this message at 08:02 on Oct 23, 2013

MC Fruit Stripe
Nov 26, 2002

around and around we go

kiwid posted:

Is there a better way to evaluate VMware vSphere and vCenter rather than reinstalling every 2 months?

Same question, but for System Center, SolarWinds, and Veeam products. I install, life happens, I come back, and oh great, I have 3 days left on my trial.

MC Fruit Stripe
Nov 26, 2002

around and around we go
Anyone have any kinda Black Friday-ish tips? I'm planning to build a 2nd lab box, this one with 64GB of memory. I want that memory cheaply. I'll either build a desktop PC or get it in a server. Who has the deals this weekend?

MC Fruit Stripe
Nov 26, 2002

around and around we go
As little as possible, and yes.

I'm obsessed with running as many VMs as possible to make my environment, and therefore the tasks, as complex as possible. Figure I will buy myself a new lab box during the holidays.

MC Fruit Stripe
Nov 26, 2002

around and around we go
That's nice, dilbert, and I appreciate you sharing your lab, but I really have no idea what you're responding to. I was asking if anyone knew of any good Black Friday deals, and I asked here because it is within the context of upgrading my home lab.

MC Fruit Stripe
Nov 26, 2002

around and around we go
I did go ahead and enable it (or rather, disable it), because I'll take any gain I can get. But yeah, I'm definitely overreaching with my home lab. I used the word complex earlier, but it's not really complicated, it's just a lot of stuff. I've got a SQL box; well, I want to stand up 2 more in a cluster and set up replication between the standalone and the cluster. I have Exchange 2010; well, I want to stand up 2013, snapshot, migrate, rollback, and migrate again. I have a bunch of SolarWinds and Veeam demos I want to run, which obviously require other boxes to be up, otherwise they're monitoring thin air. I want to play with the full System Center suite. I want to run a Puppet VM, GNS3, this, that, and the other. Now they obviously don't all need to be on at the same time, not even close, but you can see how it starts adding up.

I don't think my lab is better, or that it gives me any particular insight, but I think you're (dilbert) coming from a strictly or overwhelmingly VMware point of view, whereas I've got my vSphere environment but then on top of it I'm trying to learn a little of this, a little of that. That's why I'm thinking about a second box with 64GB of memory in it; I figure between that and the existing 32GB, even I can surely run anything I'd ever want.

MC Fruit Stripe
Nov 26, 2002

around and around we go
God, there's 4 threads I could post this in. This is probably the least active thread of those, but it also feels like the thread where people may have run into the issue.

I'm working through half a kernel of an unformed thought...

I currently run a lab domain and network in VMware Workstation on a pretty beefy desktop computer. That computer is also on my regular home network. My home network is 192.168.1.x and my lab network is 192.168.10.x. The two are bridged via a pfSense VM with 2 NICs, one attached to each network. This allows the lab network to have its own environment yet also get out to the internet when it needs to. I like this setup.

I'm going to be standing up a second lab box. I'd like that lab box to be on the same subnet as the lab network. How am I going to do this, or what's my closest approximation?

Here's a drawing which illustrates the problem I'm anticipating when I have the second lab box set up.

[diagram missing]

VM1 pings VM2, but when the ping reaches its first hop at pfSense, it sees a 192.168.10.x address and is like, uh, that's not an IP I have information for, goodbye.

Of course then we get into the option of putting lab box 1 on 192.168.10.x and lab box 2 on 192.168.20.x, but even then it feels like there are going to be problems. For example, if I want to move a VM over to the other lab box, same situation: it won't know how to route to a 192.168.20.x IP on a 192.168.10.x subnet. And even during the normal course of duty, the 192.168.10.x pfSense isn't going to know where to send information for 192.168.20.x hosts.
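
Here's a tiny illustration of the on-link check that's biting me, as a Python sketch (the addresses are just assumed examples):

    import ipaddress

    lan = ipaddress.ip_network("192.168.10.0/24")
    for dst in ("192.168.10.25", "192.168.20.25"):
        # An off-link destination gets handed to the default gateway, which
        # here is a pfSense that has never heard of 192.168.20.0/24
        print(dst, "on-link" if ipaddress.ip_address(dst) in lan else "needs a route")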

There are ways to do this, but none of them feel particularly graceful, so I'm curious how you guys would handle it.

MC Fruit Stripe
Nov 26, 2002

around and around we go

thebigcow posted:

edit: you'll have to change ip addresses if you move vms between subnets. Alternatively get a nic for the virtual machines on each box and plug them into a cheapo router like a Mikrotik and save yourself a lot of hassle.

This, for what it's worth, is where I'm leaning. I'm not ready to put a rack of Cisco equipment between the two boxes to simulate separate locations, but that's going to be the end goal and another NIC for each box would need to be part of that, so I think maybe this is simply going to be a step in that direction.

That plus a static route might just do everything I ask of it. Good show!

MC Fruit Stripe
Nov 26, 2002

around and around we go
What are you cats using to back up your VMs? I had the nightmare scenario - a single vmdk went corrupt in Openfiler, taking all of my nested VMs with it.

I could set up the disks in Openfiler in a RAID setup, or I could periodically back up the entire Openfiler VM and all its disks. There are options here; what's the best one?

MC Fruit Stripe
Nov 26, 2002

around and around we go
Eh, point taken, but this isn't anything to be diagnosed. I set up Openfiler, attached a large virtual hard disk to it, and presented that as a LUN to a few ESXi hosts. One of the 10GB vmdk files that it created went corrupt, meaning all the information on the entire array is gone. There's no rebuilding it, it's just gone, because I didn't set it up like I would in a production environment with redundant disks etc. There's a good break, then there's "guess I'm starting over". But yes, I need something in place before I get into this spot again. Veeam will probably be a good solution.

MC Fruit Stripe
Nov 26, 2002

around and around we go
Internet, I've priced a new lab box with 64GB of RAM, a Core i7-4820K, and five 2TB HDDs* (4 for RAID 10, 1 for a spare), for $1634.41. Do I need this?

* I was closer than you want to know to rolling my own iSCSI SAN...

MC Fruit Stripe
Nov 26, 2002

around and around we go
I'll call this a lab question rather than a virtualization question because running a bunch of ESXi hosts nested inside of Workstation on a Windows 7 box tends to be a bit rare in production.

Ever since I rebuilt my lab on 5.5 U1, hosts just decide to disappear on me. For example, here I am attempting to reconnect a host which has become unresponsive:

[screenshot missing]
(and yeah that's my wife and then my name, I could blur it but pfft, I'll take that over some hilariously goony "dickbutt.lol" any day)

If I bounce the host it'll be fine for a while, but the problem will eventually pop up on this or another host, and round and round it goes. I thought it might be an overexcited Windows firewall, so I disabled it on the vCenter box, but that didn't resolve it. I have seen a few articles online about increasing the timeout on heartbeats, but I really don't like that solution - it times out after 60 seconds, so 6 heartbeats, but it shouldn't really be missing any; it's not like this is a particularly complex environment.

Any recommendations for next things to check?

e: Just to cover basics, once this happens I can no longer ping the host by hostname or IP, and from the host itself I can't ping out either. Rebooting resolves it until it no longer resolves it.
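
e2: In the meantime I threw together a dumb watcher so I at least get timestamps when a host drops. Quick Python sketch; the IPs are placeholders, and swap ping's -n/-w flags for -c/-W off Windows:

    # Log a timestamp whenever a nested host stops answering pings
    import datetime, subprocess, time

    HOSTS = ["192.168.10.11", "192.168.10.12"]  # placeholder nested-ESXi IPs

    while True:
        for host in HOSTS:
            up = subprocess.call(["ping", "-n", "1", "-w", "1000", host],
                                 stdout=subprocess.DEVNULL) == 0
            if not up:
                print(datetime.datetime.now(), host, "unreachable")
        time.sleep(30)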

MC Fruit Stripe fucked around with this message at 08:40 on May 15, 2014

MC Fruit Stripe
Nov 26, 2002

around and around we go
It's just Openfiler, iSCSI targets. I don't have the vSwitch on promiscuous but that's not something I normally set and I've never had this problem before, so I'd rather steer clear of additional variables, just like I didn't want to increase heartbeat timeout. It's like, okay that may hide the problem but it won't fix it.

I rebooted the host PC and the switch, and rebuilt everything again, and it is STILL happening. Honestly I'm wondering if something about 5.5u1 just doesn't play nice nested.

e: Suppose the best play at this point is to deploy a 5.5 host, add it to this vCenter, and see how it behaves. I've had a 5.5 lab up and running with no issue for months, and 5.5u1 isn't working because (reason), so time to change one piece at a time I guess.

e: You're probably labbing as much as anyone, dilbert - have you set up a 5.5u1 environment inside of Workstation 10? I'm trying to pair VMware-VIMSetup-all-5.5.0-1750795-20140201-update01 with VMware-VMvisor-Installer-5.5.0.update01-1623387.x86_64 and it hates me. I don't think I'd be surprised to find out there's a problem, since they're so new compared to Workstation 10; maybe 10's just not up to snuff, since this really isn't what it's meant for.

e: Moved one of the 5.5 hosts over to this vCenter and am deploying 3 VMs to it right now, will see how that works out... in the morning!

MC Fruit Stripe fucked around with this message at 08:01 on May 16, 2014

MC Fruit Stripe
Nov 26, 2002

around and around we go
e: nope, that wasn't the fix :(

MC Fruit Stripe fucked around with this message at 12:12 on May 16, 2014

MC Fruit Stripe
Nov 26, 2002

around and around we go

Dilbert As gently caress posted:

I only use workstation to host my vCenter, domain controllers, SQLbox, and connection servers for view. I can give it a whirl tonight see if I can recreate the issue.

I've gone back through every iteration, the problem is me. :(

I had everything working on 5.5, then I went to 5.5u1 and nothing worked, and now I've stepped back piece by piece to 5.5, and nothing works. Something has gone pear-shaped here, but I no longer believe it's any sort of incompatibility issue. I think something that I simply haven't accounted for has started acting up.

The box has two NICs, each attached to a different network, so my first thought was that somehow some packets were going to the wrong network. This should not be possible, but assuming we live in a world where it is, I increased the metric on, and even disabled, the 2nd NIC to make sure everything was traveling on the right wire, but that didn't fix it.

This sucks because my lab is dead in the water until I figure it out.

MC Fruit Stripe
Nov 26, 2002

around and around we go
I know DAF is on a break, but here is what I found. I'd previously been creating my nested ESXi hosts with 3 NICs. (Note: every time I say NIC in this entire post, I am talking about virtual NICs added to that VM.) The problem actually started when I began using 4 NICs, not when I switched to 5.5 - the cardinal sin of switching 2 variables at once threw me off. I tried a few combinations, and then tried them again, and found the following, if it helps anyone.

With 3 NICs: it's fine
With 4 NICs: random host disconnects
With 4 NICs then edit the NIC in the .vmx from "e1000" to "e1000e": it's fine

So, at least in my lab, which is just ESXi nested inside of Workstation 10 on a Windows 7 box, I can run up to 3 NICs. 4 or more creates problems, unless I change the type of NIC.

Interesting! (More like pain in my rear end...)
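
e: If anyone wants the lazy version of that .vmx edit, here's a throwaway Python script. It assumes the stock ethernetN.virtualDev = "e1000" lines, and it drops a .bak copy before touching anything:

    # Flip every e1000 NIC in a Workstation .vmx to e1000e
    import re, shutil, sys

    vmx = sys.argv[1]
    shutil.copy(vmx, vmx + ".bak")  # keep a backup before editing
    with open(vmx) as f:
        text = f.read()
    # NIC type lines look like: ethernet0.virtualDev = "e1000"
    text = re.sub(r'(ethernet\d+\.virtualDev\s*=\s*)"e1000"', r'\1"e1000e"', text)
    with open(vmx, "w") as f:
        f.write(text)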

MC Fruit Stripe fucked around with this message at 22:47 on May 18, 2014

MC Fruit Stripe
Nov 26, 2002

around and around we go
Sup my abandoned friends in this abandoned thread.

Any suggestions for backing up your home lab? I was thinking of an all-in-one solution, but I see two kinds of virtual machines, each with their own needs.

For VMs running on nested ESXi hosts, you can treat them exactly like you would a production vSphere deployment. Point a copy of Veeam at it and back it up.
For VMs running in Workstation though, this is a little different. Within the context of the lab, these are your OS-on-baremetal servers, no hypervisor. How do we want to back these up?

And most importantly, is there one product that seems to cover both kinds of servers, VM and baremetal?

e2: I mean, if I'm honest, even since the time I posted this I've settled on "oh, just copy the entire virtual machine folder, who cares" for the baremetal servers. But I don't know, if there's one solution which lets me have graceful backups for both, that'd be ideal.
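
e3: The "who cares" approach, scripted, for what it's worth: copy a powered-off Workstation VM's folder to a dated backup. Paths here are made up, obviously.

    # Dumb folder-copy backup for a powered-off Workstation VM
    import datetime, pathlib, shutil

    SRC = pathlib.Path(r"C:\VMs\dc01")           # hypothetical VM folder
    DEST_ROOT = pathlib.Path(r"D:\lab-backups")

    stamp = datetime.date.today().isoformat()
    shutil.copytree(SRC, DEST_ROOT / ("%s-%s" % (SRC.name, stamp)))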

MC Fruit Stripe fucked around with this message at 23:56 on May 25, 2015

MC Fruit Stripe
Nov 26, 2002

around and around we go
Holy potato, a fully functioning download of their flagship product? This is a great find, and I am now ecstatic that I posted my borderline incoherent question. Thank you!

Hopefully my post made a lick of sense, with the distinction between baremetal and virtual VMs. Probably anyone who runs nested VMs got what I meant.

MC Fruit Stripe fucked around with this message at 06:23 on May 27, 2015

MC Fruit Stripe
Nov 26, 2002

around and around we go

KillHour posted:

Can someone sanity check this before I spend all night implementing it? This only includes the IP stuff - I'll be adding a virtual SAN at some point.

I don't have anything to add on the sanity check portion, but I wanted both to make sure this wasn't missed by everyone else on the new page, and to throw out major kudos on the complexity of the home lab - that's getting some real work done. Much better than the "I have ESXi and built a domain controller" approach. That looks awesome.

MC Fruit Stripe
Nov 26, 2002

around and around we go
I also would recommend an i5 at a minimum; however, I will say the same thing here that I have said every time I have posted - remember, it's a home lab, not production. You're basically never going to be using more than about 2 of the VMs at a time. You might write a query on your Win7 box which'll hit your SQL box, or you might vMotion a VM from 1 host to another, but you're never going to be doing some massive stress test of your environment. Or, if you are, you'll have moved well past a single i3 or i5 box by then.

Hell, when I build VMs at home, my life's goal is to use as little memory as possible so I can hammer in as many VMs as I can. My domain controller has like 800MB. In production I give it 4GB because "eh", but at home, find the bare minimum I can use comfortably and go.

MC Fruit Stripe fucked around with this message at 20:12 on Mar 4, 2016

MC Fruit Stripe
Nov 26, 2002

around and around we go
The only problem with letting someone set up a lab environment is that it uses company resources in the way of datacenter rack space, power, and cooling. If you're concerned about it from a network standpoint, then your network guys aren't on top of things.

MC Fruit Stripe
Nov 26, 2002

around and around we go
Think of all those people using GNS3 and all the trouble they're going through to purchase every piece of hardware they're emulating, and you pretty much have your answer.

MC Fruit Stripe
Nov 26, 2002

around and around we go
I am now running my home lab on Windows 10 and have a problem I didn't account for. The new Windows Update model of "you're getting this patch, you're rebooting" is already proving to be a bit of an issue with keeping my lab up. Go to log into a server: no response. Or I try to log into a server hosted on another box: authentication issue, because AD has disappeared. This, because the Windows 10 computer running VMware Workstation has rebooted and all of those VMs have been powered off.

Obviously I'm not the first person to run into this. I feel like I've got a couple of options here, and I'm not wild about either of them: disable Windows updates, or set my account to auto-login and then start VMware Workstation and the VMs automatically. Neither of those thrills me.
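
If I did hold my nose and go the auto-start route, a logon-triggered script running something like this would bring the lab back up. Sketch only - vmrun does ship with Workstation, but the paths here are hypothetical:

    # Start each lab VM headless via Workstation's vmrun CLI
    import subprocess

    VMS = [
        r"C:\VMs\dc01\dc01.vmx",         # hypothetical paths
        r"C:\VMs\pfsense\pfsense.vmx",
    ]
    for vmx in VMS:
        # '-T ws' targets Workstation; 'nogui' starts without a console window
        subprocess.call(["vmrun", "-T", "ws", "start", vmx, "nogui"])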

Anyone else running their VMware lab on Windows 10 and found a graceful solution to the rather constant reboots?

MC Fruit Stripe
Nov 26, 2002

around and around we go

Perplx posted:

Ideally you want something else as your hypervisor, windows server, esxi, linux etc.
You can also setup a wsus server on your vm so you can install when you want, and lastly windows updates shouldn't be a surprise, they come out on the 2nd tuesday of the month at 10am PST just set a reminder for yourself.

Yeah, it looks like I just got surprised by how many updates and reboots it needed to do. Over the first 3 days of running this PC, I'd guess it rebooted 5 times for updates. None since, now that it's caught up. The initial blast of reboots definitely gave me a "whoa, this is not going to be a very reliable lab" concern, but we're fine.

MC Fruit Stripe
Nov 26, 2002

around and around we go
Looking for advice on the best approach to SAN software for a nested lab; please read my specific scenario.

My home lab consists of 3 normal desktop PCs which all serve other purposes. All I did was take 3 computers that receive daily use (one downstairs, one upstairs, and the media PC) and slap a ton of memory (64GB, 64GB, and 32GB) and an extra HDD in them. Essentially, the unused cycles on those computers make up my home lab. No external SAN, just 3 computers. On memory alone I can run a pretty massive lab, and the HDDs, well, maybe I wouldn't want to encode video on 3 VMs at once, but I get by.

Then I run VMware Workstation on each, with Openfiler and ESXi hosts installed on each, and those all added to a vCenter server. So we have PC1, and on that I'll have PC1SAN, PC1ESX1, PC1ESX2, and PC1ESX3, which in vCenter all get added to cluster PC1. Then I can vMotion between the different PCs or SANs. So that's the setup - I want to be clear about that because it means I don't have a tremendous amount of horsepower. No whitebox SAN here, just some Openfiler installs in Workstation.

I've used iSCSI via Openfiler forever, but I'm starting to wonder if I should migrate to something else. I just bought VMUG Advantage/EvalExperience, so I'll have an install of VSAN for the home lab. Should I stick with Openfiler? Move to VSAN? FreeNAS? Something else?

The only caveat is that it has to run decently on my existing setup. Yeah I don't have a ton of IOPS running it like this, but I've yet to figure out how to be more than one person doing more than one thing at a time, so the slowness has never been much of a factor. That said, I don't have a lot of IOPS to spare, so I can't install something that's going to need a legit 4 vCPU and 8GB of memory on each box. Low footprint, good performance, what's my best bet here?

MC Fruit Stripe
Nov 26, 2002

around and around we go
The specific use case, in this scenario, is definitely VMware stack, so that's a big plus to VSAN. The SANs and ESXi hosts exist solely to give me a vSphere environment to play with at home. Any VM which might need a little more horsepower I'd just build as a traditional VM in Workstation. Combining both of your replies, then, it sounds like my best plan would be to stick with Openfiler on 1 or 2 of the SANs, then run VSAN on the other 1 or 2, knowing that performance is going to go down a bit but will probably still be livable. And there's no third option I really need to concern myself with; just run some combination of Openfiler and VSAN.

e: VSAN installed with the knowledge that I may need to pick up at least a 2nd hard drive for a box that's running it.

MC Fruit Stripe fucked around with this message at 17:01 on Oct 15, 2016

MC Fruit Stripe
Nov 26, 2002

around and around we go
I should be able to build it in a nested environment though, no? 3 ESXi VMs on the same PC, each with an additional, empty drive for VSAN to use. Slow or not, and without even having Googled it yet, surely someone has installed VSAN inside a Workstation environment.

MC Fruit Stripe
Nov 26, 2002

around and around we go

big money big clit posted:

Sure, you can nest it, but it's going to run like poo poo so you'll never run VMs on it so all you'll really be testing is setting it up, which you could just do just as well with VMware hands on labs. VSAN setup is like a 10 minute task. It's really not worth going through the trouble of doing it at home if you aren't actually going to use it.

If I get a decently sized SSD and then partition that off to a few nested ESXi hosts, maybe start thinking VSAN then?

MC Fruit Stripe
Nov 26, 2002

around and around we go
Do you guys run your labs on your home network or have them segregated? If you have them on the home network, how do you handle DHCP and DNS?

I want to do away with my lab network and have everything on the home network. I want devices on the home network to continue as they are, pulling DHCP from the router, resolving DNS in the usual way, etc. In the lab, though, I'd like to continue having that DNS and DHCP. I'm not seeing my angle here.

MC Fruit Stripe
Nov 26, 2002

around and around we go
Yeah DNS is easy if I set it up manually, but if addresses are handed out via DHCP then we have a problem, because I want some devices receiving DNS1 and some receiving DNS2.

Working through some issues on the home network which are being caused by having two networks. I think I've got it narrowed down to two solutions: I can either 1) completely segregate the two networks by not configuring a default gateway on the lab NICs, or 2) run my home and lab off the same network while maintaining separate DHCP and DNS by ____.

It's what goes in ____ that has me thrown. I mean it may not even be possible with the equipment that I have (no VLAN capability) but I'm at least trying to look at options.

MC Fruit Stripe
Nov 26, 2002

around and around we go
I love your advice, let me read up on the links provided and see what looks appropriate for my setup. Thanks all!

MC Fruit Stripe
Nov 26, 2002

around and around we go
God, why do you people keep buying servers? You confuse me so much. Just slap a bunch of memory in whatever desktop you're already using, call it LAB1, do the same thing for LAB2 down the line, hook them to a switch, and that's your lab.

e: this post has been made from
