|
The Steam Link will work great as long as you're just playing games. Alternatively, you can stream to a Steam client on Linux and change your monitor's input (this is nice anyway if you're playing an FPS or something where input latency really matters). As long as your system supports VT-d and the IOMMU groups check out, you're good to go.
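For anyone wanting to verify the IOMMU part before committing to hardware: the kernel exposes the groups under /sys, so a quick script can dump them. A minimal sketch (Linux-only; the base path is parameterized so it can be pointed at a test tree):

```python
import os

def iommu_groups(base="/sys/kernel/iommu_groups"):
    """Map each IOMMU group number to the PCI addresses it contains."""
    groups = {}
    if not os.path.isdir(base):
        return groups  # IOMMU disabled in firmware/kernel, or not a Linux box
    for group in sorted(os.listdir(base), key=int):
        devdir = os.path.join(base, group, "devices")
        groups[group] = sorted(os.listdir(devdir))
    return groups

if __name__ == "__main__":
    for group, devices in iommu_groups().items():
        print(f"group {group}: {' '.join(devices)}")
```

The group is the unit of assignment, so if the GPU shares a group with something you can't give up (a SATA controller, say), passthrough gets messy.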
|
# ? Jan 3, 2016 23:46 |
|
|
Combat Pretzel posted:So I've been watching this video: https://www.youtube.com/watch?v=LXOaCkbt4lI

This isn't really too impressive, since it is just passthrough on the graphics cards. Virtualization of the GPU is not nearly that good. It works for older games and basic desktop stuff, but not high-performance stuff.

evol262 posted:The short answer is "yes". I can't be hosed watching some video, and scrolling through it doesn't give any indication of whether that's GPU virtualization or just passthrough. Passthrough works fine in Linux, including with GRID cards/etc.

The problem in graphics virtualization is that it is mostly being driven by NVIDIA and their stuff is honestly crap. I don't have hopes for good virtualized graphics until Intel decides it is worth the money.
|
# ? Jan 4, 2016 01:00 |
|
DevNull posted:The problem in graphics virtualization is that it is mostly being driven by NVIDIA and their stuff is honestly crap. I don't have hopes for good virtualized graphics until Intel decides it is worth the money. I haven't touched AMD's stuff yet, but at least it's not all done in the driver like nvidia, so hoping it's better...
|
# ? Jan 4, 2016 01:44 |
|
DevNull posted:I don't have hopes for good virtualized graphics until Intel decides it is worth the money.
|
# ? Jan 4, 2016 02:34 |
|
Combat Pretzel posted:Intel is doing something in regards to graphics "virtualization", which seems to amount to more or less hooking up the guest driver with the host one over the VM bus, if I understood it correctly. But it doesn't appear to be vendor-neutral stuff, and as we know, NVIDIA's not going to follow anyhow.

Intel GVT or whatever it was. It's basically the same approach nvidia has taken. It works on Iris Pro with Xen and KVM. It could probably work with Hyper-V, too, if Intel adds it to the Windows driver (and if Microsoft lets them add bits to Hyper-V). AMD's approach is by far the cleanest, but it's not possible to map multiple guests to one physical non-IOV card without driver support. Intel will hopefully follow AMD in the future (5th gen GPUs). nvidia will grudgingly give it lackluster support and throw money into making it look bad compared to their proprietary crap, which they'll make sure only works on Quadros, ridiculous gamer cards, and Tesla/GRID cards. (AMD and Intel are both also gating this on "enterprise/premium" hardware now, but are much better about trickling down once the cost of research has been paid down.)
|
# ? Jan 4, 2016 03:27 |
|
I hope that AMD adds/doesn't block the feature on consumer cards. I think when upgrading I'll take the plunge on a whole VM system for shits and giggles, tossing the card between Windows and Linux guests. Hopefully someone figures out how to move devices to and from the host in the mid term. Maybe some interesting things to be tried when there's a permanent VM host underneath, too. Seems like a cheaper way to run everything from the NAS, at least cheaper than any host bus adapters.
|
# ? Jan 4, 2016 17:40 |
|
We're about to order our brand new infrastructure and I was wondering if you guys can take a look at it real quick and see if this all looks right (from a networking perspective). Assume all SFP+ links are 10G. edit: the vmware teaming links are going to be in failover mode rather than teaming. kiwid fucked around with this message at 16:10 on Jan 7, 2016 |
# ? Jan 7, 2016 16:07 |
|
kiwid posted:We're about to order our brand new infrastructure and I was wondering if you guys can take a look at it real quick and see if this all looks right (from a networking perspective). Are the two switches on the core side in the same stack or standalone? If they're standalone then you're going to have to provide a lot more detail about the Layer 2/3 configuration if you want anyone to validate it from a networking perspective. Also I'd be wary about using the Cisco SMB switches as they aren't exactly scalable (Also they've got weird specs, apparently they can only do jumbo frames on 10/100/1000 ports).
|
# ? Jan 7, 2016 16:34 |
|
cheese-cube posted:Are the two switches on the core side in the same stack or standalone? If they're standalone then you're going to have to provide a lot more detail about the Layer 2/3 configuration if you want anyone to validate it from a networking perspective. They are standalone and we'd be doing round-robin MPIO with them. Do you recommend something else? We didn't want to spend a gently caress load of money on the switches if we didn't have to. We aren't too worried about scalability since for the foreseeable future we're just going to be running VMware essentials plus which limits us to 3 hosts anyway. However, I didn't know about the jumbo frame limitation so I'll have to look into that.
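For reference, if the hosts are ESXi, round-robin pathing is set per device (or as a default rule) with esxcli — a sketch assuming ESXi 5.x+; the naa identifier below is a placeholder, and array vendors like Nimble publish their own recommended path/IOPS settings, so check their VMware guide first:

```
# Show devices and their current path selection policy
esxcli storage nmp device list

# Set one device (placeholder naa ID) to round robin
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

# Optionally switch paths every IO instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1
```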
|
# ? Jan 7, 2016 16:46 |
|
Strongly considering picking this up for my new virtualization testbed. http://www.ebay.com/itm/HP-Fusion-I...wkAAOSwCQNWewfy Specs: E5-2620 x2, 32GB DDR3 ECC, 1TB 7200RPM HDD. Datasheet: http://www.google.com/url?sa=t&rct=...bpab64iZh7oVtNw This is mostly for learning/studying, but I'll probably end up hosting some game servers on it for general goon enjoyment. I'll be running vSphere 5 with a mix of Windows and Linux guests. Any thoughts? Edit: There's this one too, but it's more than I want to pay if I don't need the extra space. http://www.ebay.com/itm/640GB-Fusio...5cAAOSwFnFWFWtK KillHour fucked around with this message at 17:00 on Jan 7, 2016 |
# ? Jan 7, 2016 16:54 |
|
kiwid posted:They are standalone and we'd be doing round-robin MPIO with them. Do you recommend something else? We didn't want to spend a gently caress load of money on the switches if we didn't have to. We aren't too worried about scalability since for the foreseeable future we're just going to be running VMware essentials plus which limits us to 3 hosts anyway. However, I didn't know about the jumbo frame limitation so I'll have to look into that. How is the storage going to be presented? iSCSI or NFS?
|
# ? Jan 7, 2016 16:55 |
|
cheese-cube posted:How is the storage going to be presented? iSCSI or NFS?

iSCSI, both with the Nimble and the QNAP.

edit: Do you have any sources for the jumbo frame limitation I can look at?

kiwid fucked around with this message at 16:58 on Jan 7, 2016 |
# ? Jan 7, 2016 16:56 |
|
kiwid posted:iSCSI, both with the Nimble and the QNAP.

Hmm it should work I guess with iSCSI provided that each target/initiator IP address is tied to a single interface. Are you purchasing via a VAR? It would be a good idea to get one of their sales engineers or whatnot to eyeball the design. Re jumbo frames, I just noticed it when skimming this datasheet: http://www.cisco.com/c/en/us/products/collateral/switches/small-business-500-series-stackable-managed-switches/c78-695646_data_sheet.html

quote:Frame sizes up to 9K (9216) bytes. Supported on 10/100 and Gigabit Ethernet interfaces. The default MTU is 2K.

Just thought it was weird how they specifically mention only 10/100/1000. Maybe the hardware can't switch 9K frames at 10G or something. Maybe ask over in the Cisco thread.
|
# ? Jan 7, 2016 17:09 |
|
cheese-cube posted:Hmm it should work I guess with iSCSI provided that each target/initiator IP address is tied to a single interface. Are you purchasing via a VAR? It would be a good idea to get one of their sales engineers or whatnot to eyeball the design. Great, thanks for your help. Yes it's through a VAR and they say it's solid as they have other SMBs running a similar setup. edit: I started raising these questions with our VAR so they're setting us up with calls to Cisco professionals so we'll see where it goes. kiwid fucked around with this message at 17:30 on Jan 7, 2016 |
# ? Jan 7, 2016 17:16 |
|
kiwid posted:We're about to order our brand new infrastructure and I was wondering if you guys can take a look at it real quick and see if this all looks right (from a networking perspective). I bought one of those SG500 10g switches for a temporary situation and it was an unmitigated disaster. I wouldn't wish them on my worst enemy. If you're looking for a cheaper solution than a pair of 4500-Xs you can do Nexus 3k. You'll be way better off.
|
# ? Jan 7, 2016 18:26 |
|
I just installed my first VM. I used an off-the-shelf machine at work and installed VirtualBox with the Windows 10 evaluation. Once I had that installed and patched, I put that VM into a Truecrypt 7.1a container. In the VM, I installed my PIA VPN and Steam. I connected to an unsecured guest network from a neighboring office with my VPN on, went to efukt.com and sent a friend a video of a woman with an extremely large vagina through Steam chat. I re-encrypted the container and the machine is now currently imaging a fresh install of Windows. All in all, an easy process. I like VMs now!
|
# ? Jan 7, 2016 19:14 |
|
THE DOG HOUSE posted:I just installed my first VM. I used an off-the-shelf machine at work and installed VirtualBox with the Windows 10 evaluation. Once I had that installed and patched, I put that VM into a Truecrypt 7.1a container. In the VM, I installed my PIA VPN and Steam. I connected to an unsecured guest network from a neighboring office with my VPN on, went to efukt.com and sent a friend a video of a woman with an extremely large vagina through Steam chat. I re-encrypted the container and the machine is now currently imaging a fresh install of Windows.

My office. NOW!
|
# ? Jan 7, 2016 19:16 |
|
KillHour posted:Strongly considering picking this up for my new virtualization testbed. Ended up pulling the trigger on this. I'm sure it will be more than fine for a home lab. Time to drag my rack out of the garage and into the basement!
|
# ? Jan 8, 2016 00:09 |
|
kiwid posted:edit: Do you have any sources for the jumbo frame limitation I can look at?
|
# ? Jan 8, 2016 01:20 |
|
Jumbo frames don't really improve performance in any direct way on a low-latency network, which your storage network should be. The throughput increase is negligible. The main benefit of jumbo is less overhead processing network communication on the source and destination, as well as along the data path. Fewer frames means fewer sets of headers to unpack and fewer decisions to make for things like load balancing, MAC address table lookups, filtering rules, etc.
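To put rough numbers on the overhead claim, here's a back-of-envelope sketch; it counts on-wire Ethernet overhead (preamble, header, FCS, inter-frame gap) plus 40 bytes of IPv4/TCP headers per full-size frame:

```python
ETH_WIRE = 8 + 14 + 4 + 12   # preamble, Ethernet header, FCS, inter-frame gap
IP_TCP = 20 + 20             # IPv4 + TCP headers, no options

def wire_efficiency(mtu):
    """Fraction of on-wire bytes that is actual payload for full-size frames."""
    payload = mtu - IP_TCP
    return payload / (mtu + ETH_WIRE)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {wire_efficiency(mtu):.1%} payload")
```

That works out to about 94.9% vs 99.1% wire efficiency, so jumbo buys roughly four points of raw throughput at best — the bigger effect is ~6x fewer frames (and header lookups/decisions) for the same amount of data.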
|
# ? Jan 8, 2016 03:56 |
|
So, are jumbo frames just for convenience's sake, as in something extra, or is there ever a circumstance where you'd really need them enabled? Ex: buying an i7 over an i5. Gucci Loafers fucked around with this message at 04:13 on Jan 8, 2016 |
# ? Jan 8, 2016 04:05 |
|
Tab8715 posted:Are Jumbo Frames just for convenience sake as in it's just something extra or is there ever a circumstance where you'd really need it enabled?
|
# ? Jan 8, 2016 04:38 |
|
Tab8715 posted:Are Jumbo Frames just for convenience sake as in it's just something extra or is there ever a circumstance where you'd really need it enabled? Like adorai pointed out, this is only useful on systems where throughput is paramount, like storage networks used for very high-throughput batch processing. When you saturate any network, you will see interactivity suffer. The disadvantage is that since your packets are larger, they take longer to deliver, which can mess with interactivity for UDP streams that don't wait for large chunks before acting on streamed data. This is especially noticeable in VoIP applications, but you can see it in other low-latency environments like certain kinds of games, etc.
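The "larger packets take longer to deliver" point is just serialization delay, which is easy to compute — a quick sketch:

```python
def serialization_delay_us(frame_bytes, link_bps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

for frame in (1538, 9038):  # full frames incl. Ethernet overhead
    for gbps in (1, 10):
        d = serialization_delay_us(frame, gbps * 1e9)
        print(f"{frame}B frame @ {gbps}G: {d:.1f} us")
```

A small voice packet stuck behind a single 9K frame eats about 72 µs extra on a 1G link, versus ~12 µs behind a 1500-byte frame; at 10G both shrink by a factor of ten, which is part of why jumbo hurts interactivity less on faster links.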
|
# ? Jan 8, 2016 04:53 |
|
The other disadvantage, as mentioned, is manageability. It's really easy to gently caress up MTU settings somewhere along the line once you decide to change the default. And MTU mismatches can manifest in really bizarre ways. The right answer is usually to benchmark things with standard and jumbo, and unless your workload actually benefits from them, leave it alone. Sometimes your hand is forced, though. We were having perf issues recently on an Equallogic device and Dell wouldn't give me the time of day until we turned on jumbo frames because it's their only supported config.
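A cheap way to smoke out those MTU mismatches end to end is a do-not-fragment ping; the trick is that the payload argument excludes the 20-byte IPv4 and 8-byte ICMP headers. A sketch of the arithmetic:

```python
def df_ping_payload(mtu, ip_header=20, icmp_header=8):
    """Payload size that makes a non-fragmented ping exactly fill `mtu`."""
    return mtu - ip_header - icmp_header

# Linux:  ping -M do -s 8972 <target>    ESXi:  vmkping -d -s 8972 <target>
print(df_ping_payload(9000))  # -> 8972
print(df_ping_payload(1500))  # -> 1472
```

If the 8972-byte ping dies somewhere the 1472-byte one survives, some hop in the path is still at the default MTU.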
|
# ? Jan 8, 2016 05:04 |
|
Q, have you considered a converged architecture solution? vSan managing a bay of SSDs attached to a set of hypervisors by a common storage midplane has enabled positively stupid iops in my HPC cluster without the cost of storage vendor ssd markup.
|
# ? Jan 8, 2016 05:06 |
|
Docjowles posted:The other disadvantage, as mentioned, is manageability. It's really easy to gently caress up MTU settings somewhere along the line once you decide to change the default. And MTU mismatches can manifest in really bizarre ways.
|
# ? Jan 8, 2016 05:23 |
|
Potato Salad posted:Q, have you considered a converged architecture solution? vSan managing a bay of SSDs attached to a set of hypervisors by a common storage midplane has enabled positively stupid iops in my HPC cluster without the cost of storage vendor ssd markup.

MTU matters outside of storage in memory- or compute-bound situations, or where you have an in-memory cluster (redis, riak, or whatever) serving a shitload of data which never touches backend storage, etc. Less of a traditional virt situation, but still relevant.
|
# ? Jan 8, 2016 06:06 |
|
Potato Salad posted:Q, have you considered a converged architecture solution? vSan managing a bay of SSDs attached to a set of hypervisors by a common storage midplane has enabled positively stupid iops in my HPC cluster without the cost of storage vendor ssd markup. If you're talking about VMware vSAN, the tradeoff is that the licensing is as expensive or more than buying dedicated hardware from a vendor. Also, the markup on drives covers significant QA and firmware development and the drives are generally dual ported to allow for redundant connections to each drive. They aren't off the shelf SSD drives.
|
# ? Jan 8, 2016 07:48 |
|
Just a word of warning for folks thinking of trying out Windows Server 2016 - Hyper-V in 2016 requires Second Level Address Translation (SLAT). SLAT is only available on the i-series (i3, i5, i7) and newer Xeons. It is not supported on the Core 2 series of CPUs. You can't install the Hyper-V role on a Core 2 Duo/Quad computer. I am really burned up about this because I didn't know about this until I put together a Core 2 Duo box and tried installing Server 2016 TP4. At this point I am exploring the possibility of running Windows containers as opposed to Hyper-V containers. This should be awful/interesting. HPL fucked around with this message at 00:11 on Jan 10, 2016 |
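If you want to check for SLAT before committing hardware: on Linux it shows up in the CPU flags (ept on Intel, npt on AMD), and on Windows Sysinternals Coreinfo -v reports it. A sketch that parses /proc/cpuinfo-style text:

```python
def has_slat(cpuinfo_text):
    """True if the flags line advertises SLAT (Intel EPT or AMD NPT)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "ept" in flags or "npt" in flags
    return False

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        print("SLAT:", has_slat(f.read()))
```

The Core 2 line predates EPT (it arrived with Nehalem), which is why TP4 refuses the Hyper-V role there.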
# ? Jan 9, 2016 23:49 |
|
HPL posted:Just a word of warning for folks thinking of trying out Windows Server 2016 - Hyper-V in 2016 requires Second Level Address Translation (SLAT). SLAT is only available on the i-series (i3, i5, i7) and newer Xeons. It is not supported on the Core 2 series of CPUs. You can't install the Hyper-V role on a Core 2 Duo/Quad computer. I am really burned up about this because I didn't know about this until I put together a Core 2 Duo box and tried installing Server 2016 TP4.

Good to know but why are you building boxes with 8 year old processors?
|
# ? Jan 10, 2016 01:46 |
|
wyoak posted:Good to know but why are you building boxes with 8 year old processors? Because I already have an esxi box and I'm a poor student right now and I don't want to go out and spend a lot of money on something that I'm going to end up burning to the ground when the final Server 2016 comes out. Besides, a Core 2 Duo E8400 with 12GB of RAM should in theory have enough oomph to run Server 2016 and some VMs, especially since all I want to do is mess around with nano servers and containers. Incidentally, in my very short time with Server 2016 thus far, it boots up damned quick. On my "antiquated" hardware, it does two loops of the dotted circle and then it's ready to go. Non-Hyper-V containers only take a few seconds to fire up. It uses the Windows 10-style interface and there isn't all the fluffy non-essential crap on it like I heard was in earlier previews. The start menu has nice sensible admin-related programs pinned to it. All in all, if you're already familiar with 2012 R2, 2016 is going to feel pretty familiar. EDIT: Oh god, working on things that are not properly documented is sucking horribly. HPL fucked around with this message at 02:29 on Jan 10, 2016 |
# ? Jan 10, 2016 02:05 |
|
Well, trying to build a test network on Server 2016 without Hyper-V was about as fun as whacking my hand with a sledgehammer. Time to work on adding Xenserver to the list of hypervisors I've tried. I'll be back though, I'll be back. Server 2016 hasn't seen the last of me.
|
# ? Jan 10, 2016 07:17 |
|
With KVM, any best practices in regards to resource allocation? If I'm running a single VM on top full-time, can I allocate all cores, or do I need to leave one for the host so it doesn't bog down handling IO?
|
# ? Jan 10, 2016 16:07 |
|
You can assign everything. The host won't bog down on IO, though interactive applications on the host (in a desktop setting) may not perform well if all the cores are pegged.
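In libvirt terms, giving the guest everything is just the <vcpu> count — a minimal sketch of the relevant domain-XML stanza, assuming a 4-core host; the pinning block is optional but helps latency consistency:

```xml
<domain type='kvm'>
  <!-- hand the guest all 4 host cores; host threads get scheduled around it -->
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <!-- optional 1:1 pinning of vCPUs to host cores -->
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <!-- ...rest of the domain definition... -->
</domain>
```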
|
# ? Jan 10, 2016 16:21 |
|
OK, cool. Once everything's set up as I want, the Linux KVM host will be just there to deal with the bullshit IO arrangement between it and the NAS.
|
# ? Jan 10, 2016 17:14 |
|
Well, tried Xenserver and it was okay, not really ringing my bells though. Nothing compelling to make one switch over from ESXi. If anything, it's much fiddlier than ESXi, especially with how it deals with storage. I gave Server 2016 another throw and tried running VirtualBox on it. Kitematic worked well for running containers, but it gave containers IP addresses in the 192.168.99.0 subnet, which was kind of dumb. I've never been a fan of VirtualBox's networking. Anyways, burned that to the ground and gave Proxmox a go. I've only been on it for an hour or two but I'm already warming up to it. Containers are super easy since dealing with them is an integral part of the hypervisor as opposed to a duct-taped kludge in a VM. Console in Proxmox works well. Nice and snappy, good display quality. Still looking forward to the final version of Server 2016 and getting some decent (SLAT/Hyper-V capable) hardware. Server 2016 seems like it might actually make a decent daily driver OS. Nano servers are kind of a waste at this point since there's not much functionality happening with them (only certain roles can be used with Nano servers) and they're a pain in the butt to deal with since you have to use djoin to get them joined to your domain. Windows containers work much better than Nano servers since there's much less loving around to get them going, they start up fast and have less overhead. It'll be interesting to see how things develop on that front. If Microsoft can come up with some nifty tools to make managing and networking containers easy, they'll cover any ground lost from being late to the game in no time. EDIT: Oh cool, I got SolyDK to install on Proxmox. I couldn't even do that in Hyper-V or bare metal. HPL fucked around with this message at 09:43 on Jan 11, 2016 |
# ? Jan 11, 2016 09:28 |
|
The company I work for is developing a series of integration drivers for the government and we are having difficulty testing the one for Solarwinds NPM because of a lack of a dedicated test environment that we can fiddle with at will. Surprisingly, people frown at the thought of unplugging various network devices and bringing other people's work to a halt in the name of TESTING! Does anyone know if it's possible to deploy a virtual solution to our particular issue? Or can they point me to some instructions on how to configure such a solution? The thwack community is being decidedly unhelpful as they are convinced I just want to create a node in Solarwinds, and trying to convince them otherwise is like banging my head against a brick wall.
|
# ? Jan 12, 2016 17:08 |
|
friendbot2000 posted:The company I work for is developing a series of integration drivers for the government and we are having difficulty testing the one for Solarwinds NPM because of a lack of a dedicated test environment that we can fiddle with at will. Surprisingly people frown at the thought of unplugging various network devices and bring other peoples work to a halt in the name of TESTING! Does anyone know if its possible to deploy a virtual solution to our particular issue? Or can they point me to some instructions on how to configure such a solution? The thwack community is being decidedly unhelpful as they are convinced I just want to create a node in Solarwinds and trying to convince them otherwise is like banging my head against a brick wall. SNMPSim?
|
# ? Jan 12, 2016 17:11 |
|
After spending more time with Proxmox, I have really come to appreciate it. It is very easy to use, works well with almost any operating system, and you can install a desktop on the host machine and manage your VMs via the web GUI right on the host itself. It's not a hypervisor I would use for learning virtualization as it is not one of the big boys, but if you have a spare computer sitting around and want to run VMs and still be able to use the computer itself, it's fantastic. The only thing to watch out for is that if you make a bootable USB drive from the ISO, it may not install, since it may hang while trying to look for a CD-ROM drive (of all things). If that happens, go check the Proxmox wiki and they list a couple of programs to try to create your USB drive.
|
# ? Jan 13, 2016 07:34 |
|
|
Is there an issue with VMware's KB site? Every KB I pull up takes me to an error page. I swear, I only need to consult the VMware site like every other month, but every single time I do I run into some problem. I think I'm just the unluckiest guy.
|
# ? Jan 13, 2016 16:33 |