BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

HPL posted:

Windows 10 Hyper-V doesn't do NIC teaming. But overall, Windows 10 is surprisingly powerful.

The management interface is extremely bare-bones and you'll likely want to pursue 3rd-party tools, but the price is hard to beat and many of the features are hitting "good enough," so it's forgivable, especially if you're looking for a VDI solution. 2016 with true DirectX abstraction has been great in my lab tests, and AMD is saying they can do 32 concurrent seats on an S9300 x2, so 64 total in a 2U box, which VMware can't touch right now.


HPL
Aug 28, 2002

Worst case scenario.
Anyone ever get Windows Server 2016 Hyper-V working in a KVM guest VM?

evol262
Nov 30, 2010
#!/usr/bin/perl
Yes.

Enable nested virt (add nested=1 to kvm_intel or kvm_amd, depending on which one you use). Use -cpu host (or "Copy Host CPU Configuration" in libvirt). That's all.

I literally just did this this week to see how the nesting performed.
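
(For anyone following along, a minimal sketch of those two steps on a Linux host; the module options are the standard kvm_intel/kvm_amd ones, and the guest image name and sizes are just placeholders.)

```sh
# Check whether nested virt is already enabled (use kvm_amd on AMD):
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module (with no VMs running):
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel

# Boot the guest with the host CPU exposed so the Hyper-V role sees VT-x
# (the CLI equivalent of "Copy Host CPU Configuration" in libvirt/virt-manager):
qemu-system-x86_64 -enable-kvm -cpu host -m 8192 -smp 4 \
  -drive file=win2016.qcow2,if=virtio
```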

HPL
Aug 28, 2002

Worst case scenario.

evol262 posted:

Yes.

Enable nested virt (add nested=1 to kvm_intel or kvm_amd, depending on which one you use). Use -cpu host (or "Copy Host CPU Configuration" in libvirt). That's all.

I literally just did this this week to see how the nesting performed.

Already had both those set. Didn't work for me. That's why I'm asking. Were you using 2016 or 2012 R2? BIOS or UEFI? Not that it should matter. I tried adding Hyper-V to 2016 and it said that it detected a hypervisor and wouldn't install.

evol262
Nov 30, 2010
#!/usr/bin/perl
I used 2016.

I always use EFI.

I also don't use libvirt very often. Plain qemu was fine. But you may need to turn off hyperv enlightenments and set kvm to hidden in the libvirt xml if Microsoft is gonna be lovely like that.
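
(For anyone searching later, a hedged sketch of what those libvirt tweaks look like; the fragments below go in the guest's domain XML via virsh edit, and with plain qemu the rough equivalent is -cpu host,kvm=off.)

```xml
<!-- Illustrative domain-XML fragments: delete any <hyperv> enlightenment
     block under <features>, and hide the KVM signature from the guest. -->
<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
<!-- Expose the host CPU; host-model ("Copy Host CPU Configuration") also works. -->
<cpu mode='host-passthrough'/>
```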

Kachunkachunk
Jun 6, 2011
Kinda lovely, but I also can understand that they probably don't want customers assuming perfect execution of their nested Hyper-V hypervisors and their VMs. It takes active development effort to ensure it all works properly (and quickly), or you'll just end up with some very salty users and worse, potential data loss. It doesn't matter if you label it all with a CYA message like "Experimental."

evol262
Nov 30, 2010
#!/usr/bin/perl
It's more like "detecting whether you're running virtualized and disabling functionality is something nvidia does"

Check for the CPU flag. If it's there, enable virt. Bugs/cases about double nested virt get closed as unsupported.

You don't need to mark it as "experimental". Normal Hyper-V has never said "nuh-uh" if it's running nested in VMware/KVM/Xen before. This is a change in the wrong direction.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
save me 5 minutes of research -- is it yet possible to create a VM on USB storage in ESXi? I want to have a diskless server that passes an RDM into a single guest. In versions past it was not possible, I'm just wondering if it's possible in 6. I'm not planning to put any IO on the USB disk.

Potato Salad
Oct 23, 2014

nobody cares


Yes. I do this at home. It's just a datastore.

Edit - here's a guide that looks accurate after a quick read-over. http://www.virten.net/2015/10/usb-devices-as-vmfs-datastore-in-vsphere-esxi-6-0/

Potato Salad fucked around with this message at 04:29 on Jun 1, 2016
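
(For the impatient, the linked guide boils down to roughly the following from the ESXi shell; the device path is a placeholder and the exact setptbl/GUID string for the VMFS partition comes from the guide itself.)

```sh
# Stop the USB arbitrator so the host owns the USB disk instead of offering
# it for passthrough:
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off

# Find the USB device, then give it a GPT label:
ls /dev/disks/
partedUtil mklabel /dev/disks/mpx.vmhba32:C0:T0:L0 gpt

# Create the VMFS partition with partedUtil setptbl (see the guide for the
# exact arguments), then format it as a datastore:
vmkfstools -C vmfs5 -S USB-Datastore /dev/disks/mpx.vmhba32:C0:T0:L0:1
```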

Wicaeed
Feb 8, 2005
School me on the VMware SDN options.

We've got teams (DevOps/QA) that love AWS. I'll admit it's fairly flexible to scale up new resources & then scale them down again when no longer in use, but I have some misgivings about the cost associated with running a large number of resource-hungry VMs on AWS. I myself work on the Datacenter Ops side, which includes fairly heavy VMware/UCS administration. I love our platform, but it does lack flexibility right now since our DNS/DHCP servers are statically configured (no Dynamic DNS, no static DHCP bindings), and our network (VLANs/IP subnets) is completely statically configured as well.

One of our leadership's goals for my team is to provide faster turnaround on new environments for both our QA & Dev departments. I've done the best I can from my end (templating VMs that hadn't been done before, creating new customization profiles for departments, etc.), however now I'm coming to a point where I feel like I can't really do much more without new tools from VMware. At the same time, our Dev team is looking into tools like Packer & Terraform to allow us to rapidly provision (and de-provision) pre-configured environments for Dev/QA/Stage/etc, and eventually even our Prod environments. This is a fairly massive effort that is still ongoing, as it requires a rewrite of our entire stack, so at least I've got time on my hands.

AWS provides some really flexible options for creating new networks/ip ranges & making sure that they remain secure and segmented between Prod environments and stage environments. I've quickly found out that VMware's vanilla offerings for vCenter don't really allow for this and to get that functionality you need to start looking into vCloud, or even NSX.

I don't even have a VCP yet (still working on it) and haven't done the HOL yet for either vCloud or NSX, but if I wanted to start looking at our options which should I look at first?

Also making it more complicated is our Cisco UCS. I know it's powerful, and I love how easy it is working with the platform, but I doubt we're doing even 15% of what UCS can do. I imagine that since SDN involves creating & deleting VLANs/Subnets from the network, there's some integration that I need between Cisco & VMware too.

evol262
Nov 30, 2010
#!/usr/bin/perl
Maybe putting the cart before the horse. I think the first question to ask yourself is "do I have a use case for SDN?", which often means "do I have multi-tenancy?" There are some use cases for it outside of that (many of which involve applications that rely on network isolation, particularly legacy applications, and trying to run multiple QA environments of your old application may exceed what preconfigured VLANs can do), but "can we pre-define some VLANs, attach them to vswitches, and provision/de-provision environments for QA/Dev/Prod on those?" is a reasonable question.

Doing it on-the-fly with NSX/Calico/Neutron/AWS is really useful as your environment expands, and you start creating environments from whole cloth (or providing dev/qa groups the ability to segment off their own environments/projects with the click of a button), but it really doesn't sound necessary for what you're describing.

What do you want to do, and where do you see SDN fitting into that picture? There are likely a lot of smaller pieces to get orchestrated first (with static VLANs, or, if you have 1000Vs, you may never need NSX/etc).

I know this is a non-answer, but "if I wanted to look at our options, which should I look at first?" -- I'd look at "do I have a use case for SDN? And can my existing UCS environment already do what I need through network overlays?" (if you have 1000Vs -- yes. 7000 with some openflow work). Unless you can find a very clear use case, I'd put it on the back burner and work on getting the rest of your continuous integration/deployment/whatever environment set up.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Wicaeed posted:

School me on the VMware SDN options.

We've got teams (DevOps/QA) that love AWS. I'll admit it's fairly flexible to scale up new resources & then scale them down again when no longer in use, but I have some misgivings about the cost associated with running a large number of resource-hungry VMs on AWS. I myself work on the Datacenter Ops side, which includes fairly heavy VMware/UCS administration. I love our platform, but it does lack flexibility right now since our DNS/DHCP servers are statically configured (no Dynamic DNS, no static DHCP bindings), and our network (VLANs/IP subnets) is completely statically configured as well.

One of our leadership's goals for my team is to provide faster turnaround on new environments for both our QA & Dev departments. I've done the best I can from my end (templating VMs that hadn't been done before, creating new customization profiles for departments, etc.), however now I'm coming to a point where I feel like I can't really do much more without new tools from VMware. At the same time, our Dev team is looking into tools like Packer & Terraform to allow us to rapidly provision (and de-provision) pre-configured environments for Dev/QA/Stage/etc, and eventually even our Prod environments. This is a fairly massive effort that is still ongoing, as it requires a rewrite of our entire stack, so at least I've got time on my hands.

AWS provides some really flexible options for creating new networks/ip ranges & making sure that they remain secure and segmented between Prod environments and stage environments. I've quickly found out that VMware's vanilla offerings for vCenter don't really allow for this and to get that functionality you need to start looking into vCloud, or even NSX.

I don't even have a VCP yet (still working on it) and haven't done the HOL yet for either vCloud or NSX, but if I wanted to start looking at our options which should I look at first?

Also making it more complicated is our Cisco UCS. I know it's powerful, and I love how easy it is working with the platform, but I doubt we're doing even 15% of what UCS can do. I imagine that since SDN involves creating & deleting VLANs/Subnets from the network, there's some integration that I need between Cisco & VMware too.

NSX can provide you overlays via VXLAN, load balancers, firewalls, and routers through a combination of kernel modules and virtual appliances. It integrates with a couple cloud management platforms out there.

I've got a number of customers that basically just "rubber stamp" out copies of their network for testing, QA and even to eventually roll it to production.

If you don't need to provision a lot of network services then you can probably just look at something like embotics or some other cloud management tool.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
my environment doesn't match yours, but we looked at nsx and honestly it did a lot of awesome poo poo. It just didn't do enough to justify the significant cost. What is your real need for sdn? Do you really need it? Can't you provision the necessary networks in advance for the team and just let them provision VMs via API? How big is the dev/qa environment that this is a real need?

Wicaeed
Feb 8, 2005

adorai posted:

my environment doesn't match yours, but we looked at nsx and honestly it did a lot of awesome poo poo. It just didn't do enough to justify the significant cost. What is your real need for sdn? Do you really need it? Can't you provision the necessary networks in advance for the team and just let them provision VMs via API? How big is the dev/qa environment that this is a real need?

It came from our Sr. Engineering Director, who was in a meeting with myself, our Director of Engineering, and the Sr. Developer, going over the plans for our new CI/CD pipeline by demoing Hashicorp Packer and Terraform.

Without going too much into detail, Terraform lets you build multi-tier applications from pre-made images (built by Packer). We're going to be using it to build on-demand testing/stage environments to allow our Dev/QA departments to test code faster. What Terraform/AWS bring to the table is letting us provision/deprovision AWS VPC resources (https://www.terraform.io/docs/providers/aws/r/vpc.html) quite dynamically.

What I got from our Director of Engineering was basically "Can our existing VMware environment do this?"

I know Terraform can build/deploy images to VMware vCenter (https://www.terraform.io/docs/providers/vsphere/index.html) but I'm trying to find out if we can even do the more dynamic parts of Terraform without having to use VMware SDN.

I have a feeling that the next inevitable question (if they can get it to work) is going to be asking why we are using VMware if our deployment pipeline isn't fully compatible with it.

Wicaeed fucked around with this message at 09:10 on Jun 6, 2016

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
I'd ask if they see themselves needing to frequently provision networks, or if you can just hand them a pool of networks that they have free rein over to select IPs/provision into. Depending on the applications/what the developers need, you probably don't need to provision on-demand networks, but if you have a use case that requires it then NSX would be worth looking into.

VMware actually very recently dropped the pricing on the bits of NSX that would provide this capability: https://www.vmware.com/products/nsx/compare.html (it's in the lowest tier). I don't think they've done a good job communicating this, though. I just spent the better part of 20 minutes trying to figure out if I could even tell you that. Basically the edition you need is going to be ~$2,000 per socket list price (as opposed to the previous ~$6,000 per socket list price). Adorai, it may be worth revisiting if the lowest tier addresses your problems.

This could allow you the flexibility to provide 'VPC-like' networking, floating IPs, etc. In fact you could potentially couple it with VIO (VMware integrated openstack) and have a lot of the same APIs/interfaces.

If they're asking "why vmware" then the answer is pretty much "my job as Wicaeed is to support the poo poo you build and I need to understand the infrastructure to do it!"

I would basically sit down and go over a few scenarios that will be common for them. Good odds that you can make Terraform work for a lot of it using the default vmware networking even if they're doing all the fancy poo poo hashicorp is selling.
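
(To make that concrete: a purely illustrative Terraform sketch using the vSphere provider syntax of that era; the names, template, and portgroup are made up. The point is that a Packer-built template can land on a statically pre-provisioned VLAN with no SDN involved.)

```hcl
provider "vsphere" {
  user           = "${var.vsphere_user}"
  password       = "${var.vsphere_password}"
  vsphere_server = "${var.vcenter_host}"
}

resource "vsphere_virtual_machine" "qa_app" {
  name   = "qa-app-01"
  vcpu   = 2
  memory = 4096

  network_interface {
    label = "QA-VLAN-120"          # existing portgroup, no NSX required
  }

  disk {
    template  = "centos7-packer-template"
    datastore = "datastore1"
  }
}
```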

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Wicaeed posted:

It came from our Sr. Engineering Director, who was in a meeting with myself, our Director of Engineering, and the Sr. Developer, going over the plans for our new CI/CD pipeline by demoing Hashicorp Packer and Terraform.

Without going too much into detail, Terraform lets you build multi-tier applications from pre-made images (built by Packer). We're going to be using it to build on-demand testing/stage environments to allow our Dev/QA departments to test code faster. What Terraform/AWS bring to the table is letting us provision/deprovision AWS VPC resources (https://www.terraform.io/docs/providers/aws/r/vpc.html) quite dynamically.

What I got from our Director of Engineering was basically "Can our existing VMware environment do this?"

I know Terraform can build/deploy images to VMware vCenter (https://www.terraform.io/docs/providers/vsphere/index.html) but I'm trying to find out if we can even do the more dynamic parts of Terraform without having to use VMware SDN.

I have a feeling that the next inevitable question (if they can get it to work) is going to be asking why we are using VMware if our deployment pipeline isn't fully compatible with it.
I'm currently doing dynamic provisioning of VPCs on AWS with Terraform and my response to this for on-premises applications is still "who gives a poo poo?"

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Vulture Culture posted:

I'm currently doing dynamic provisioning of VPCs on AWS with Terraform and my response to this for on-premises applications is still "who gives a poo poo?"
Is that because you think it is trivial for your internal support to do it quickly and easily?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

Is that because you think it is trivial for your internal support to do it quickly and easily?
I'm my internal support, and I have better things to do. Don't get me wrong, I think there are some really great use cases out there, and I'm sure there's a small niche of shops that want both on-premises infrastructure and capabilities that match public cloud. But doing that yourself is neither cheap nor easy, and even as someone who runs a large-footprint OpenStack cluster today I think that's almost always the wrong choice.

Vulture Culture fucked around with this message at 04:46 on Jun 7, 2016

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Vulture Culture posted:

I'm my internal support, and I have better things to do. Don't get me wrong, I think there are some really great use cases out there, and I'm sure there's a small niche of shops that want both on-premises infrastructure and capabilities that match public cloud. But doing that yourself is neither cheap nor easy, and even as someone who runs a large-footprint OpenStack cluster today I think that's almost always the wrong choice.
I see, I misinterpreted your comment. Are you saying that someone who thinks they want public cloud capabilities in their internal infrastructure might not really understand what that entails?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

I see, I misinterpreted your comment. Are you saying that someone who thinks they want public cloud capabilities in their internal infrastructure might not really understand what that entails?
That's part of it, but I think most people misjudge that cost/value proposition versus just moving applications that benefit from that type of infrastructure up to a public cloud.

cowboy beepboop
Feb 24, 2001

Hey all, finally getting into investigating containers. Some dumb questions:

Is it common practice to put the management interfaces on their own IP & VLAN? Most CoreOS/Docker docs seem to imply you assign IPs onto eth0, call it done, and then Docker NATs the traffic? Weird imo.

What's the preferred container host these days? I've been looking at:
* CoreOS
* Project Atomic
* DC/OS (on top of CoreOS?)
* SmartOS
* Project FIFO

Is this even the right thread? It's not strictly virtualisation :ohdear:

Hadlock
Nov 9, 2004

We're using the full CoreOS everything; turns out that was a mistake. Continued...

We're switching to Kubernetes orchestration on top of CoreOS (instead of fleetctl on top of CoreOS) with Cadvisor and Prometheus for stats tracking, and probably a Grafana front end.

A year ago nobody knew what direction anything was going in, so my boss picked fleetctl on top of CoreOS, but we were at CoreOS Fest last month and the whole industry seems to be settling on this stack:

Kubernetes on top of (insert OS here, but CoreOS is free and lightweight) with Cadvisor and Grafana

Kubernetes is built by Google and builds on their ~9 years of experience running proto-containerized apps, and all the tools seem to work with it; it's the gold standard. It works with Docker-format containers and by August should also work with Rkt-format (CoreOS's baby) containers, and after that, other stuff too.

Hadlock
Nov 9, 2004

my stepdads beer posted:

Hey all, finally getting into investigating containers. Some dumb questions:

Slightly unrelated/off topic: I added a job opening to the SH/SC job thread. We're looking for talented site reliability engineers with an interest in (because nobody actually has experience at this point) or experience with containers:

http://forums.somethingawful.com/showthread.php?threadid=3075135&pagenumber=91&perpage=40#post460825780

Come join my struggle through deploying container-based enterprise applications

Cidrick
Jun 10, 2001

Praise the siamese
We're using Mesos + Marathon, aka "DC/OS by hand". The OS powering the Mesos slaves/agents is currently CentOS on-premises, but we're starting to deploy in AWS and using CoreOS for the slaves/agents instead, since it's trivial to spin up and stop a bunch of Mesos slaves using cloud-init. The slaves, which are what actually run Docker, only have an eth0 and a docker0 interface, which NATs any traffic it gets to the containers based on the random ports that Marathon assigns.

We use Bamboo with haproxy under the covers, using a pre-baked CentOS 7 AMI that does some light self-discovery to determine how haproxy should be routing traffic. We put a load balancer (or ELB in AWS) in front of multiple haproxy boxes, and then point a wildcard DNS record at the load balancer. Using Bamboo to generate the haproxy config, haproxy looks at the host header in the request which matches the name of the application as far as Marathon sees it, then load balances your traffic to your containers.

Basically, if you have a containerized app called shitbox, and you tell Marathon to deploy three of them, three containers get spun up on Mesos slaves on random ports, Bamboo generates an haproxy listener and backend, and then when you try to make an HTTP call to "shitbox.us-west-2.some.tld", haproxy matches that host header to the three containers and load balances to the three containers on whatever random port they happen to be listening on.

It's pretty tidy, and using Bamboo's haproxy templating you can do some pretty cool stuff based on host headers or uri matching or by extracting environment variables from Marathon. I'll admit I haven't played with Kubernetes at all, but someone else on my team did about 6 months ago and decided it wasn't as good as Mesos + Marathon, whatever that means.

We've been running Docker on Mesos (with CentOS) for a little over a year now with no major issues, aside from some strange docker bugs here and there, and some learning about the perils of using devicemapper-loopback as your docker storage driver. We use OverlayFS now :)
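
(The generated haproxy config ends up looking roughly like this; a hand-written illustration of the host-header routing described above, not Bamboo's actual template output, with hostnames and ports made up.)

```
frontend http-in
    bind *:80
    # Match the Host header against the Marathon app name
    acl host_shitbox hdr(host) -i shitbox.us-west-2.some.tld
    use_backend shitbox_backend if host_shitbox

backend shitbox_backend
    balance roundrobin
    # One entry per container, on whatever random port Marathon assigned
    server shitbox_1 10.0.1.11:31005 check
    server shitbox_2 10.0.1.12:31872 check
    server shitbox_3 10.0.1.13:31433 check
```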

freeasinbeer
Mar 26, 2015

by Fluffdaddy
We're also using Mesos/Marathon, mainly because Spark has a native scheduler for Mesos. I've looked at DC/OS and played with it, but would really recommend avoiding it at the moment, as it seems like half the packages are broken, it doesn't really support any auto-scaling without headache, and it feels like I am fighting the system to get things running how I expect them to. Now this might partly be my own fault, because I had already set up the tooling for autoscaling Mesos on my own and had been using Marathon/Mesos before DC/OS was a thing, and therefore I don't buy into the Mesosphere way of doing things.

Otherwise Kubernetes is what everyone else seems to use, and it has better support for persistent data (Mesos really lacks this). I've looked at moving to it, but the native Spark scheduler is primarily why I deployed Mesos in the first place. Otherwise I love Mesos and it makes my life so much easier.

cowboy beepboop
Feb 24, 2001

Wow, thanks for the great responses. Looks like there's heaps of viable options.

evol262
Nov 30, 2010
#!/usr/bin/perl

Punkbob posted:

Otherwise Kubernetes is what everyone else seems to use, and it has better support for persistent data (Mesos really lacks this). I've looked at moving to it, but the native Spark scheduler is primarily why I deployed Mesos in the first place. Otherwise I love Mesos and it makes my life so much easier.

You know you can run kubernetes on Mesos?

Mr Shiny Pants
Nov 12, 2012

Cidrick posted:

We're using Mesos + Marathon, aka "DC/OS by hand". The OS powering the Mesos slaves/agents is currently CentOS on-premises, but we're starting to deploy in AWS and using CoreOS for the slaves/agents instead, since it's trivial to spin up and stop a bunch of Mesos slaves using cloud-init. The slaves, which are what actually run Docker, only have an eth0 and a docker0 interface, which NATs any traffic it gets to the containers based on the random ports that Marathon assigns.

We use Bamboo with haproxy under the covers, using a pre-baked CentOS 7 AMI that does some light self-discovery to determine how haproxy should be routing traffic. We put a load balancer (or ELB in AWS) in front of multiple haproxy boxes, and then point a wildcard DNS record at the load balancer. Using Bamboo to generate the haproxy config, haproxy looks at the host header in the request which matches the name of the application as far as Marathon sees it, then load balances your traffic to your containers.

Basically, if you have a containerized app called shitbox, and you tell Marathon to deploy three of them, three containers get spun up on Mesos slaves on random ports, Bamboo generates an haproxy listener and backend, and then when you try to make an HTTP call to "shitbox.us-west-2.some.tld", haproxy matches that host header to the three containers and load balances to the three containers on whatever random port they happen to be listening on.

It's pretty tidy, and using Bamboo's haproxy templating you can do some pretty cool stuff based on host headers or uri matching or by extracting environment variables from Marathon. I'll admit I haven't played with Kubernetes at all, but someone else on my team did about 6 months ago and decided it wasn't as good as Mesos + Marathon, whatever that means.

We've been running Docker on Mesos (with CentOS) for a little over a year now with no major issues, aside from some strange docker bugs here and there, and some learning about the perils of using devicemapper-loopback as your docker storage driver. We use OverlayFS now :)

This is cool, I was wondering how service discovery would work.

Cidrick
Jun 10, 2001

Praise the siamese

Mr Shiny Pants posted:

This is cool, I was wondering how service discovery would work.

We put a Docker container with a consul agent on each Mesos slave that phones home to join the consul cluster. We run the consul container in host mode so that any other container on the host can query the agent via localhost:8500.
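
(For anyone curious, that pattern is roughly the following; illustrative only, with the image name, bind address, and join target as placeholders.)

```sh
# On each Mesos slave: run a consul agent with host networking so other
# containers on the box can reach it at localhost:8500.
docker run -d --name consul-agent --net=host consul agent \
  -bind=10.0.1.11 \
  -retry-join=consul.service.internal \
  -client=0.0.0.0

# Any other container (also on the host network, or pointed at the host IP)
# can then do service discovery over the local HTTP API:
curl http://localhost:8500/v1/catalog/services
```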

stevewm
May 10, 2005
Anyone have any experience with Scale Computing's HC3 platform? https://www.scalecomputing.com/products/product-overview/

They are one of a few solutions we are considering. Their solution so far is the most compelling...

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


stevewm posted:

Anyone have any experience with Scale Computing's HC3 platform? https://www.scalecomputing.com/products/product-overview/

They are one of a few solutions we are considering. Their solution so far is the most compelling...

That is pretty cool but what's the hypervisor?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Tab8715 posted:

That is pretty cool but what's the hypervisor?
KVM + presumably QEMU with custom management bits on top

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


I'd want to know how all the networking and bits come together but drat that's cool.

What were the other solutions you were looking at?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Looks like just a standard hyperconverged offering, but pitched at smaller customers with a lower cost of entry.

Hi Jinx
Feb 12, 2016
I'm building a workstation/gaming rig and would love some input on the virtualization aspect.

I obviously need Windows (Games, Outlook, Visual Studio), and I really want to use ZFS for storage. So I have two options:

- Linux as the host with KVM (I guess), and pass through the GPUs to a Windows guest. Many examples of this being done successfully, but as far as I know SLI is not possible with GPU passthrough, right? I also won't have a GPU I can dedicate to Linux in this case (no onboard video, and no room to plug one in either); is this an issue?

- Windows 10 as the host, running Linux + ZFS in a VM, with the SATA disks using Hyper-V disk passthrough. Windows can then access storage with SMB, iSCSI, maybe NFS. So far I'm leaning towards this; not sure how much storage performance I'll lose though. I know Samba isn't that great on Linux. No recent experiences with iSCSI or NFS at all.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Hi Jinx posted:

I'm building a workstation/gaming rig and would love some input on the virtualization aspect.

I obviously need Windows (Games, Outlook, Visual Studio), and I really want to use ZFS for storage. So I have two options:

Maybe make your life easier and separate the storage to its own computer and stick it in some closet. Also forget about SLI, I believe it's usually more trouble than it's worth. Just buy a more powerful GPU.

evol262
Nov 30, 2010
#!/usr/bin/perl

Hi Jinx posted:

I'm building a workstation/gaming rig and would love some input on the virtualization aspect.

I obviously need Windows (Games, Outlook, Visual Studio), and I really want to use ZFS for storage. So I have two options:

- Linux as the host with KVM (I guess), and pass through the GPUs to a Windows guest. Many examples of this being done successfully, but as far as I know SLI is not possible with GPU passthrough, right? I also won't have a GPU I can dedicate to Linux in this case (no onboard video, and no room to plug one in either); is this an issue?

- Windows 10 as the host, running Linux + ZFS in a VM, with the SATA disks using Hyper-V disk passthrough. Windows can then access storage with SMB, iSCSI, maybe NFS. So far I'm leaning towards this; not sure how much storage performance I'll lose though. I know Samba isn't that great on Linux. No recent experiences with iSCSI or NFS at all.

Samba is fine.

This is just the computer version of "jack of all trades, master of none". Or "more money than sense".

Do you need a storage server? What for? Why do you want ZFS?

I'm curious about use case, because you're almost certainly better off just taking the money you'd throw away on SLI and using it to buy a cheap motherboard+cpu+memory for a storage server, then having a normal windows workstation. Run VMs on either one if you really need/want them, but virtualization just so you can have a storage server and a gaming system is :psyduck:

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

evol262 posted:

I'm curious about use case, because you're almost certainly better off just taking the money you'd throw away on SLI and using it to buy a cheap motherboard+cpu+memory for a storage server, then having a normal windows workstation. Run VMs on either one if you really need/want them, but virtualization just so you can have a storage server and a gaming system is :psyduck:
I agree, why not just separate them? You can do the storage and virtualization servers with low-power parts that are inexpensive.

Hi Jinx
Feb 12, 2016
SLI: I have the two GTX 1080s already. Not sure I can get a single more powerful card. :p The reason is... why not? I have a 4K monitor, so wanting to run games in 4K is pretty understandable. One 1080 does a decent job with most modern titles; two should be fine for a steady 60 fps.

ZFS: Self-healing and dedupe. I want dedupe for VM backups / snapshots, and I have enough ECC RAM to do it.

I realize I could do this in two machines, but it'll end up costing more, take up more space, consume more power, produce more heat & noise, etc. I could buy/build a cheap NAS box but it won't give me the speed or the reliability I want, and the rig certainly has the power to spare to also do storage, which is a pretty menial task in a single-user environment.

Evol262 asked about use case: aside from gaming (which is really not the main point) it's meant to let me do software development on multiple platforms, which also involves running 4-5 VMs at a time; mostly for testing & debugging.
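
(If you do go down that road, the dedupe part is a one-liner, but it's worth sizing the dedup table before trusting it; a minimal sketch, with pool/dataset names made up.)

```sh
# Turn dedup on only for the dataset holding VM backups/snapshots:
zfs create -o dedup=on -o compression=lz4 tank/vm-backups

# Simulate the dedup table and ratio for existing data first; the DDT needs
# a healthy chunk of RAM (a common rule of thumb is ~5 GB per TB of unique data):
zdb -S tank

# Check the realized ratio later:
zpool list -o name,size,alloc,dedupratio tank
```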


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Just buy more storage and forget about needing ZFS. It'll be way less futzing around and get you everything you need (want).
