|
You could probably build a decent ITX Xeon system for less than a NUC
|
# ? Jun 17, 2019 00:05 |
|
A Dell PowerEdge server would be nice, but anything with a Xeon and a good SSD or three would probably make a nice machine too. Throw a couple of NICs in there as well.
|
# ? Jun 17, 2019 02:02 |
|
I failed to include in my post that there are uATX Xeon boards/cases out there. You can get a C226 board for $60. Xen is...really not where the future is at. Today is about a huge variety of container stacks atop KVM, ESXi, and (to a lesser degree) Hyper-V. Potato Salad fucked around with this message at 01:57 on Jun 18, 2019 |
# ? Jun 18, 2019 01:55 |
|
Potato Salad posted:Xen is...really not where the future is at. Today is about a huge variety of container stacks atop KVM, ESXi, and (to a lesser degree) Hyper-V. Eh. Depends on what you mean. Xen is far better as an open source lab solution than ESXi. I'd rather use Xen XCP than ESXi, as it has more community support and more features that you can use without a license. Now, Enterprise? I agree there.
|
# ? Jun 18, 2019 12:10 |
|
CommieGIR posted:Eh. Depends on what you mean. Xen is far better as an open source lab solution than ESXi. Do you mind expanding on why? I haven't used Xen in about 10 years, but I'm open to hearing what advantages it has on the low end.
|
# ? Jun 18, 2019 14:11 |
|
Bob Morales posted:Do you mind expanding on why? I haven't used Xen in about 10 years, but I'm open to hearing what advantages it has on the low end. Well, for one, not having to pay to get the management console. XCP is largely community driven and free-as-in-beer, and you can use almost all of the enterprise-grade features that you'd normally pay for in VMware (or even Citrix XenServer, for that matter), so I don't see VMware as a competitor for lab use. Not to mention a lot of advanced storage features and powerful utilities for managing your lab, like Xen Orchestra. Sure, you could argue that using VMware strictly by console gets you more used to managing an ESXi box, but honestly, most people want virtualization at home to use the hypervisor, not to learn VMware. Don't get me wrong: from an Enterprise perspective, VMware is much more mature. Caveat: I haven't played with KVM yet, hence why I have no opinion on it.
|
# ? Jun 18, 2019 14:32 |
|
CommieGIR posted:Caveat: I haven't played with KVM yet, hence why I have no opinion on it. This is exactly why I try to tell people to use KVM in home labs, or just the Hyper-V built into W10 (or, if you wanna become a Hyper-V PowerShell pro, Hyper-V Server). Use things you might actually see in a work situation. I used Xen (and XenServer) for years at work, and now it's pretty dead everywhere I've worked or seen for the past 5+ years. The final nail in Xen's coffin was when AWS rolled their own flavor of KVM and stepped away from using Xen.
|
# ? Jun 18, 2019 15:27 |
|
TheFace posted:This is exactly why I try to tell people to use KVM in home labs, or just the Hyper-V built into W10 (or, if you wanna become a Hyper-V PowerShell pro, Hyper-V Server). Use things you might actually see in a work situation. Maybe I'll move to KVM down the road, but Xen is still keeping me very happy.
|
# ? Jun 18, 2019 16:46 |
|
Potato Salad posted:I failed to include in my post that there are uATX Xeon boards/cases out there. Everyone posted:Xen is kinda dead
|
# ? Jun 18, 2019 17:38 |
|
A good KVM management stack will have great free features and be applicable to real-world ops. It's going to be a little bit harder to pick up and you have to do a lot more reading, but when you come out the other end you're going to have some pretty cool skills.
|
# ? Jun 18, 2019 19:57 |
|
CommieGIR posted:Eh. Depends on what you mean. Xen is far better as an open source lab solution than ESXi. Marinmo posted:I just want cheap, manageable and hopefully performant setup, my list of priorities being in that order too. If that's really the case then some form of desktop ("type 2") virtualization might be your best bet. evil_bunnY fucked around with this message at 12:17 on Jun 19, 2019 |
# ? Jun 19, 2019 12:14 |
|
evil_bunnY posted:If that's really the case then some form of desktop ("type 2") virtualization might be your best bet.
|
# ? Jun 19, 2019 17:38 |
|
Read locks usually aren’t exclusive
|
# ? Jun 19, 2019 18:12 |
|
What's really cool about discussing NFS locking behavior is that locking isn't part of NFSv2/v3 at all, servers don't care whether or not you lock any files whatsoever, and it's implemented through a separate protocol (NLM) that is respected by some but not all NFS clients.
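To make the "advisory" part concrete, here's a minimal Python sketch (the mount path is made up): a second process that also calls lockf will wait on the lock, but a process that never asks for the lock can still read and write the same file, and on NFSv2/v3 the lock only reaches the server at all if the client speaks NLM.

```python
import fcntl
import os

# Hypothetical file on an NFS mount.
path = "/mnt/nfs/shared.dat"
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)

# Advisory exclusive lock: only processes that also call lockf() honor it.
# On NFSv2/v3 clients this is typically translated into NLM traffic, which
# the server tracks separately from ordinary reads and writes.
fcntl.lockf(fd, fcntl.LOCK_EX)
try:
    os.write(fd, b"only cooperating writers wait for this lock\n")
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
```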
|
# ? Jun 19, 2019 19:12 |
|
I know VMware has a thing where it limits a file to 5 read locks max; everything after that gets rejected. That made for problems when a bunch of VMs were mounting the same ISO on a datastore. Maybe other hypervisors have similar restrictions.
|
# ? Jun 19, 2019 19:19 |
|
Vulture Culture posted:What's really cool about discussing NFS locking behavior is that locking isn't part of NFSv2/v3 at all, servers don't care whether or not you lock any files whatsoever, and it's implemented through a separate protocol (NLM) that is respected by some but not all NFS clients. Lmao I'm so glad I never have to worry about this.
|
# ? Jun 19, 2019 20:07 |
|
evil_bunnY posted:Lmao I’m so glad I never have to worry about this.
|
# ? Jun 19, 2019 21:34 |
|
evil_bunnY posted:Lmao I'm so glad I never have to worry about this. This made for some real fun if you ever tried to do multiprotocol access on a NetApp volume, since CIFS has mandatory locking and NFS has advisory locking, and getting those to play nicely on the same file is basically impossible. Mostly it works fine, though, since any application that cares about locking and supports NFS just handles it out of band through lock files or some other mechanism; even NLM support isn't always a given.
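For anyone who hasn't seen the out-of-band approach, a minimal sketch of the lock-file pattern in Python (the path and retry numbers are made up): an exclusive create decides the winner (O_EXCL is generally honored atomically by modern NFS clients), so it behaves the same whether or not the client speaks NLM.

```python
import errno
import os
import time

LOCKFILE = "/mnt/share/myapp.lock"  # hypothetical path on the shared volume

def acquire_lockfile(path, retries=50, delay=0.1):
    """Try to create the lock file; O_EXCL makes the create fail if it already exists."""
    for _ in range(retries):
        try:
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
            os.write(fd, str(os.getpid()).encode())  # record the owner for debugging
            os.close(fd)
            return True
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(delay)  # someone else holds it; back off and retry
    return False

if acquire_lockfile(LOCKFILE):
    try:
        pass  # work that needs exclusive access goes here
    finally:
        os.remove(LOCKFILE)
```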
|
# ? Jun 19, 2019 21:46 |
|
YOLOsubmarine posted:This made for some real fun if you ever tried to do multiprotocol access on a NetApp volume since CIFS has mandatory locking and NFS has advisory locking and getting those to play nicely on the same file is basically impossible.
|
# ? Jun 19, 2019 21:51 |
|
Vulture Culture posted:Storage is the printers of the datacenter
|
# ? Jun 19, 2019 21:59 |
|
YOLOsubmarine posted:This made for some real fun if you ever tried to do multiprotocol access on a NetApp volume since CIFS has mandatory locking and NFS has advisory locking and getting those to play nicely on the same file is basically impossible.
|
# ? Jun 19, 2019 22:24 |
|
So uh, NSX in your VMware-powered '' platform would be able to prevent tenants from just assigning IP addresses that they haven't been allocated (in this case, the gateway) to their VMs and killing Internet access for an entire subnet, wouldn't it?
|
# ? Jul 5, 2019 19:19 |
|
Anyone have any issues installing SRM 8.2? It may have to do with my vCenter SSO setup here, but switching from external PSC to embedded has me questioning whether I have best practices here. These are all in different datacenters.
- vCenter1 (SSO/vsphere.local master), VR/SRM installed fine and runs fine, registered with vCenter1
- vCenter2 (connected to vsphere.local on vCenter1), VR/SRM installed, but SRM errors out trying to register with vCenter2
- vCenter3 (connected to vsphere.local on vCenter1), no SRM needed
- vCenter4 (connected to vsphere.local on vCenter1), no SRM needed
If I open vCenter1 and look at "site recovery", both SRM installations show up, but vCenter2's install has a red box below it saying that SRM isn't installed, and I get a 404 when trying to go to the SRM DR page (https://srm-vcenter2/dr). Any ideas? I'm using the 8.2 OVF appliance for everything; they're not Windows installs. Are people using that sort of hub-and-spoke setup with ESCs? Or should I be making each datacenter's vCenter its own SSO master, and then joining it to the company AD?
|
# ? Jul 10, 2019 15:05 |
|
Considering blowing away my seldom-used FreeNAS server to try the Azure Stack dev kit. The server itself is an R710 with 96GB and plenty of disk. I also have an R620 with 128GB currently running a vSphere setup. Just trying to decide what my options are. I gather the ASDK is intended for single-node setups only, though it would be pretty nice to run the Azure Stack support services on the 710 and leave the 620 for pure compute. Not sure if that's possible through a config change or if ASDK is actually factually locked down to single-node operation. I've done all of five seconds of research on this, spurred on only by the fact that I have one VM running on my beefy 620 and I'm fine blowing it away to try something new. If this is a terrible idea I'm completely open to hearing that.
|
# ? Jul 10, 2019 16:23 |
|
COOL CORN posted:Anyone have any issues installing SRM 8.2? It may have to do with my vCenter SSO setup here, but switching from external PSC to embedded has me questioning whether I have best practices here. These are all in different datacenters. Sweet heavens, using AD binds will be so much easier. More resilient, too.
|
# ? Jul 10, 2019 17:49 |
|
Thanks Ants posted:So uh, NSX in your VMware-powered '' platform would be able to prevent tenants from just assigning IP addresses that they haven't been allocated (in this case, the gateway) to their VMs and killing Internet access for an entire subnet, wouldn't it? Yes, you could use SpoofGuard to do this.
|
# ? Jul 10, 2019 22:54 |
|
Thanks, I was sure there must be something, but I don't know the product so I couldn't pinpoint the relevant feature.
|
# ? Jul 10, 2019 23:34 |
|
I'm in the process of re-configuring my home lab and have a networking question. My (planned so far) setup is:
- VM Host1: CentOS (maybe) KVM on AMD 1700X/64GB, one 1Gb Ethernet port
- VM Host2: Windows Server 2016 Hyper-V on Intel 4790K/16GB, one 1Gb Ethernet port
- Storage/VHD store: FreeNAS on Intel E3-1220 v3/32GB, six 4TB drives in RAID10, two 1Gb Ethernet ports
All connected to an 8-port managed switch. Because the storage server is the only one with multiple network interfaces, I wouldn't be able to utilize iSCSI multipathing on the two hosts, correct? I have the ability to use link aggregation between the storage and the switch, so if I'm not able to use multipathing I assume it would probably be best to enable LAG.
|
# ? Jul 15, 2019 22:26 |
|
Actuarial Fables posted:I'm in the process of re-configuring my home lab and have a networking question. My (planned so far) setup is: Why even attempt to use iSCSI? Just aggregate the links and use NFS and SMB. You can technically use MPIO with a single NIC because the storage end has two NICs (so two paths, Server NIC to Storage 1, and Server NIC to Storage 2). I just don't think the juice is worth the squeeze when you can just go file instead of block.
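As a rough sketch of what that looks like on a Linux initiator (assuming open-iscsi and multipathd are installed; the portal addresses below are made up), you log in to the target through each storage NIC and let the multipath layer merge the two sessions into one device:

```python
import subprocess

# Hypothetical iSCSI portal addresses, one per FreeNAS NIC.
PORTALS = ["192.168.1.10", "192.168.1.11"]

def run(cmd):
    """Echo and run a command, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover targets on each portal, then log in to both. Even with a single
# NIC on the host, this yields two iSCSI sessions (one per storage NIC),
# which dm-multipath can present as two paths to the same LUN.
for portal in PORTALS:
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal])
    run(["iscsiadm", "-m", "node", "-p", portal, "--login"])

# Show the resulting multipath device(s); requires multipathd to be running.
run(["multipath", "-ll"])
```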
|
# ? Jul 15, 2019 22:53 |
|
TheFace posted:Why even attempt to use iSCSI? I want to try it out! quote:You can technically use MPIO with a single NIC because the storage end has two NICs (so two paths, Server NIC to Storage 1, and Server NIC to storage 2). I just don't think the juice is worth the squeeze when you can just go file instead of block. Makes sense. I was reading up on aggregating links in FreeNAS and noticed that they recommended leaving the links separate for iSCSI if multipathing was desired. I doubt I would notice any performance difference either way, but it's a lab so hey.
|
# ? Jul 15, 2019 23:04 |
|
Actuarial Fables posted:I want to try it out! If you're gonna do iSCSI for the experience of doing iSCSI, definitely figure out MPIO. You're right that you're not really going to see much, if any, performance benefit since your hosts are single-NIC, but it's worth doing if you never have before. It's pretty easy in both hypervisors you have going (actually, it's fairly easy in pretty much anything I've ever done). TheFace fucked around with this message at 14:15 on Jul 16, 2019 |
# ? Jul 16, 2019 14:10 |
|
TheFace posted:Why even attempt to use iSCSI? Just aggregate the links and use NFS and SMB. You can pry iSCSI out of my cold dead hands. To be fair, we are running dual 10GbE iSCSI links with MPIO from our ESXi hosts. Slightly different use case than what you quoted.
|
# ? Jul 16, 2019 16:56 |
|
Moey posted:You can pry iSCSI out of my cold dead hands. Yeah, anything in our datacenter that isn't on vSAN is on some form of iSCSI SAN. Used to do a ton of FC and just prefer iSCSI now.
|
# ? Jul 16, 2019 17:16 |
|
I miss working with FC, zoning was neat and fun
|
# ? Jul 17, 2019 01:59 |
|
I never considered FC to be interesting until I started working with it. Unlike iSCSI, it either works perfectly or everything is hosed. The native multipathing is a nice extra.
|
# ? Jul 17, 2019 07:39 |
|
SlowBloke posted:I never considered FC to be interesting until I started working with it. Unlike iSCSI, it either works perfectly or everything is hosed. The native multipathing is a nice extra. Perfect linear link aggregation is really nice too; the idea that to add bandwidth you just add a port is awesome.
|
# ? Jul 19, 2019 17:41 |
|
FC is coming back around thanks to NVMe. You can do NVMe over Ethernet of course, but FC-NVMe is what is getting most of the early support out of the gate. And it's gonna be a lot easier to gently caress up an Ethernet NVMe network than an iSCSI one.
|
# ? Jul 20, 2019 06:22 |
|
Is this the place to ask about docker?
|
# ? Aug 15, 2019 14:44 |
|
bolind posted:Is this the place to ask about docker? There are probably more Docker nerds in the CI/CD thread. This thread tends to be for more traditional virt stuff.
|
# ? Aug 15, 2019 15:00 |
|
VMs within VMs.
|
# ? Aug 15, 2019 16:13 |