|
Nice, I identified the driver binding mistakes I made and can now run Linux on bare metal. In regular use I run Xorg on the secondary big graphics card, and when I need to game, I quit Xorg and pass it through to the Windows VM. Needs more testing, and then I'll make this permanent.
|
# ? Jan 19, 2016 18:16 |
|
Has anyone run ESXi 5.5 with Intel 82599 10G adapters with SR-IOV active? VT-d and the IOMMU are on in UEFI. I added the module parameter to create the virtual functions and rebooted, but when I attach a VF to a guest it won't pass traffic. The packet counters on the interface in the guest never change. (RHEL 6.4 guest OS, Lenovo x3650 M5 chassis) The adapter works fine if I remove the VF parameters, reboot, and create a vswitch that uses it, so I know the external plumbing is correct. Do I need to grab current Intel drivers for this card for 5.5?
|
# ? Jan 19, 2016 21:16 |
|
Combat Pretzel posted:Nice, I identified the driver binding mistakes I made and can now run Linux on bare metal. In regular use I run Xorg on the secondary big graphics card, and when I need to game, I quit Xorg and pass it through to the Windows VM. Needs more testing and then I make this permanent.

fresh_cheese posted:Has anyone run ESXi 5.5 with Intel 82599 10g adapters with SR-IOV active? VT-D and IOMMU are on in uefi. I added the module parameter to create the virtual functions and rebooted, but when I attach a VF to a guest it won't pass traffic. The packet counters on the interface in the guest never change. (RHEL 6.4 guest OS, Lenovo x3650 M5 chassis)

Can you see them in lspci on the host console? The virtual functions should also enumerate on the bus. Also check esxcfg-module -g ixgbe
|
# ? Jan 19, 2016 21:25 |
|
evol262 posted:
Yes: Network Controller: Intel Corporation 82599 Ethernet Controller Virtual Function shows up on 0000:10:10.0 through 0000:10:10.7.

esxcfg-module -g ixgbe: enabled = 1, options = 'max_vfs=20,20'
|
# ? Jan 19, 2016 22:02 |
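As a side note for anyone chasing the same problem: a minimal sketch of verifying VF enumeration by filtering lspci-style output for Virtual Function entries. The sample output below is made up for illustration, not captured from a real host; on an actual ESXi console you'd feed this the real lspci output.

```python
# Sketch: confirm SR-IOV virtual functions enumerated on the PCI bus
# by filtering lspci-style output. The sample text is illustrative only.

SAMPLE_LSPCI = """\
0000:10:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
0000:10:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function
0000:10:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function
0000:10:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function
"""

def virtual_functions(lspci_output):
    """Return PCI addresses of lines that look like SR-IOV VFs."""
    return [line.split()[0]
            for line in lspci_output.splitlines()
            if "Virtual Function" in line]

vfs = virtual_functions(SAMPLE_LSPCI)
print(len(vfs), vfs[0])  # 3 VFs in the sample, first at 0000:10:10.0
```

If nothing matches, the VFs never got created and the module options (or a driver version mismatch) are the place to look before blaming the guest.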
|
Does anyone make a drive enclosure with multiple eSATA ports on it? I was thinking it would be neat to have my hypervisor boxes all running off the same storage space, like a SAN, but the ghetto version. It would (theoretically) be as fast as local storage, but without the insane 10Gbps network costs.
|
# ? Jan 20, 2016 17:29 |
|
You don't need 10Gbps (quad-port GigE Intel NICs can be had cheaply, and your lab probably won't need anything more than those can offer), and using a JBOD isn't really worth the time versus building a cheap storage server with a MicroServer or whatever, given that the price of an enclosure with multiple eSATA/SAS ports is gonna be close to a cheap Avoton build.
|
# ? Jan 20, 2016 18:32 |
|
I looked into it further and it looks like Thunderbolt would be a good way to go for something similar. Tons of bandwidth, simple cabling and somewhat reasonable pricing. Now if only I had the money.
|
# ? Jan 20, 2016 20:17 |
|
Weird interconnects are not worth it unless you already have them lying around and want to use them for fun. Just do what evol262 suggested.
|
# ? Jan 20, 2016 20:41 |
|
Anyone here using Nutanix in production? If so, how do you like it? Are you using AHV or ESXi/Hyper-V as the hypervisor?
|
# ? Jan 20, 2016 21:26 |
|
Internet Explorer posted:Anyone here using Nutanix in production? If so, how do you like it? Are you using AHV or ESXi/Hyper-V as the hypervisor?

We've had some customers using it who have had issues with monolithic workloads. The VDI customers seem happier.
|
# ? Jan 21, 2016 01:55 |
|
Funny that should come up now. I was just going to post that I got sucked into the hype and ordered some more lab gear in order to give Nutanix CE a whirl. Worst case scenario, I'll roll back to something more traditional on newer hardware than I started with.
|
# ? Jan 21, 2016 04:17 |
|
So what the heck is Nutanix? I went to the web site and couldn't find my way past the wall of hype.
|
# ? Jan 21, 2016 08:34 |
|
HPL posted:So what the heck is Nutanix? I went to the web site and couldn't find my way past the wall of hype.

By default, it's KVM(ish) with some goodies on top. Stuff like automated tiering of data between spindles and SSD, smart placement of workloads, the ability to aggregate local storage pools and treat them like shared storage, fault tolerance by replicating n copies of data across the cluster members, and a more shiny happy look and feel than the traditional hypervisors and their management tools.

They're basically trying to abstract away the backend stuff (storage, mainly) as much as possible. They're trying to get people to stop buying SANs and horsepower separately and instead buy both in a single box that you can add additional boxes to as necessary.

Now, whether any of this actually works in practice is something I'm looking forward to reporting back on after the new heavy metal gets here. I've heard their pitch for a while now and kind of wrote it off since I'm a curmudgeon, but it's gaining popularity with a good segment of my customers so I figured it's worth taking a look at.
|
# ? Jan 21, 2016 10:22 |
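To illustrate the "replicating n copies across the cluster members" part above, here's a toy sketch of replication-factor placement. This is purely conceptual and in no way Nutanix's actual placement algorithm; the node names and round-robin scheme are made up for illustration.

```python
# Conceptual sketch of replication-factor (RF) placement: each data
# chunk gets rf copies on distinct nodes, so any single node failure
# leaves at least one surviving copy. Not a real product's algorithm.

def place_replicas(chunk_id, nodes, rf=2):
    """Pick rf distinct nodes for a chunk, round-robin by chunk id."""
    return [nodes[(chunk_id + i) % len(nodes)] for i in range(rf)]

nodes = ["node-a", "node-b", "node-c", "node-d"]
placement = {c: place_replicas(c, nodes, rf=2) for c in range(8)}

# Any single node failure still leaves a live copy of every chunk:
for failed in nodes:
    assert all(any(n != failed for n in copies)
               for copies in placement.values())
print(placement[0], placement[1])
```

The point of the sketch: with RF=2 and distinct placement, "node fails, you just replace it and it rebuilds" falls out of the data layout rather than from a dedicated SAN.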
|
So in other words they can do everything Server 2012 R2 can already do?
|
# ? Jan 21, 2016 15:32 |
|
I will say this for Nutanix: they waited a month after the opening-day Star Wars screening they and Dell invited me to before their salesperson started calling me every loving day, so that's something.
|
# ? Jan 21, 2016 16:09 |
|
HPL posted:So in other words they can do everything Server 2012 R2 can already do?

but it's pretty
|
# ? Jan 21, 2016 16:19 |
|
Still my favorite bit of Nutanix advocacy: http://www.nutanix.com/2013/12/02/nutanix-delivers-a-gpu-platform-that-pays-for-itself-in-bitcoins/
|
# ? Jan 21, 2016 16:29 |
|
HPL posted:So in other words they can do everything Server 2012 R2 can already do?

Everybody is doing hyperconverged, but no, proper Hyper-V hyperconvergence isn't slated until Server 2016. The value of Nutanix (or dvSwitches+VSAN, or whatever) can't really be realized until you're doing something other than reading a press release about stuff you might want to put in your lab.
|
# ? Jan 21, 2016 17:18 |
|
The value of Nutanix is that it is scale-out (compute and IO capacity can be added incrementally, and in one chunk), shared nothing (if a node fails you just replace it and it rebuilds), and tightly integrated (no real storage management required; storage is just a function of the platform). It's basically a "datacenter in a box". It's also hypervisor agnostic, so you can run VMware or Hyper-V or their own KVM-based hypervisor. YOLOsubmarine fucked around with this message at 18:07 on Jan 21, 2016 |
# ? Jan 21, 2016 18:03 |
|
On the heels of the JBOD/DIY/whatever discussion, anyone here using a Synology in production for datastores? I'm looking at the DS1815 or DS1813. 8 disks; I'll probably put in 7 big drives and one cache SSD. Team the NICs for 4gbps and call it a day? This would be for a home lab with no real SUPER disk-intensive stuff -- I have a 1TB DAS RAID for that. It has VAAI support, but I'm not sure what, if anything, that actually affects.

Right now I'm playing with setting up FreeNAS on a spare R710, but to be honest I'm finding that I just want something I can plop in and press 3 buttons on and not worry about for the next 4 years, and a DIY solution isn't that.

e: Oh wait, never mind, there's a NAS thread for questions just like this. Please disregard. some kinda jackal fucked around with this message at 19:43 on Jan 21, 2016 |
# ? Jan 21, 2016 19:40 |
|
BangersInMyKnickers posted:How are plugins handled? Our NetApp guys told us to stay away because their integration was only on the Windows instance but that was a while ago.

That may still be the case; I haven't touched a NetApp box in over 4 years. VUM registers with vCenter, though, and lets you pull down the plugin to the client / installs whatever extensions are required in the web client. NSX Manager also works this way.
|
# ? Jan 23, 2016 06:12 |
|
1000101 posted:That may still be the case. I haven't touched a netapp box in over 4 years. VUM registers with vCenter though and lets you pull down the plugin to the client/installs whatever extensions are required in the web client.

Yea, that's the way it works for the NetApp plugin. You need a Windows box to run the bits but you can still register it to a VCSA appliance.
|
# ? Jan 23, 2016 06:19 |
|
So how does JBOD affect disk IO? I've got a server I'm playing with and 5 drives with ~150gb each. I'm going to install two Exchange servers, two DCs, and maybe two other low-impact things on ESXi. In total I might use 350gb of storage. I'm concerned about disk IO. I see a few options open to me.

1) RAID 5. I don't need any redundancy for what I'm doing but it's an option. I'm sure I'd still have enough storage after the overhead. My disk IO concerns are more read than write anyway.

2) JBOD. This is what I'm leaning towards right now, but I'm not sure how well the load would be distributed across the disks. If I have a single disk eating all of the Exchange IOPS I'm probably gonna have a bad time.

3) I could make two RAID 0 arrays and one regular disk. This could work but could be cutting it pretty close to my storage requirements.
|
# ? Jan 23, 2016 06:25 |
|
JBOD performance depends on implementation but will generally scale roughly linearly with the number of disks. I would try to go RAID 10 with a hot spare and save 50 GB somewhere.
|
# ? Jan 23, 2016 06:52 |
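For what it's worth, the capacity trade-offs being kicked around work out roughly like this for five ~150 GB disks. This is back-of-the-envelope only; real usable capacity comes in a bit lower after formatting and metadata overhead.

```python
# Back-of-the-envelope usable capacity for 5 x 150 GB disks under the
# layouts discussed above. Ignores filesystem/metadata overhead.

DISK_GB = 150

def raid5(n, disk=DISK_GB):
    # one disk's worth of parity spread across the set
    return (n - 1) * disk

def raid10(n, disk=DISK_GB, hot_spare=0):
    # mirrored pairs; a leftover odd disk (or the spare) adds nothing
    usable_disks = n - hot_spare
    return (usable_disks // 2) * disk

def jbod(n, disk=DISK_GB):
    # plain concatenation: all capacity, no redundancy
    return n * disk

print(raid5(5))                 # 600 GB usable
print(raid10(5, hot_spare=1))   # 300 GB usable: two mirrored pairs + spare
print(jbod(5))                  # 750 GB, but lose a disk, lose its data
```

Which is why RAID 10 with a hot spare leaves you about 50 GB short of the stated 350 GB requirement, hence "save 50 GB somewhere."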
|
Methanar posted:So how does JBOD affect disk IO?

A JBOD is just a bunch of disks in an enclosure. If you're running ESXi on top of it, then unless you're using a hardware RAID controller you're going to see them all as individual disks. That means if you want to spread out some IO you're going to have to do it on your own. If that matters, add 2 virtual disks and do RAID 0 in the guest. Also a reminder that if you lose a disk in RAID 0 you lose everything in that volume. It sounds like this is a sandbox, so that may not be a big deal.
|
# ? Jan 23, 2016 20:59 |
|
This only needs to last a few months as a test environment, so I'm not worried about a disk failing right now. I'm demoing an on-prem Exchange to O365 migration plus a few other Azure-y things. My plan was to run ESXi off of an old USB stick. I do have a hardware RAID controller that I was planning on using to aggregate my disks together, which gives me all of my RAID/JBOD options.

My main question was: when I turn all my disks into a single logical volume with JBOD, does it just fill up the first disk before moving onto the second, with zero concern for trying to distribute IO? I'm going to assume it does, or that it's RAID-card dependent.

I guess I could just not bother aggregating my disks at all and just have 5 different drives available to ESXi. That might be the simplest and best option for me here.
|
# ? Jan 23, 2016 21:39 |
|
Methanar posted:This only needs to last a few months as a test environment so I'm not worried about a disk failing right now. I'm demoing an on-prem exchange to O365 migration plus a few other Azure-y things.

RAID 0 denotes striping IO and JBOD generally denotes concatenation. If you want to stripe IO for performance, you want to select RAID 0 in the hardware RAID controller.
|
# ? Jan 23, 2016 21:55 |
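The striping-vs-concatenation distinction is exactly why JBOD hammers one disk at a time. A toy sketch of where a given logical block lands under each layout (chunk and disk sizes here are made up purely for illustration):

```python
# Toy model of RAID 0 striping vs JBOD concatenation across equal disks.
# Each function maps a logical block number to (disk, offset-on-disk).

CHUNK = 64          # blocks per stripe chunk (illustrative)
DISK_BLOCKS = 1000  # blocks per disk (illustrative)

def raid0_map(block, disks):
    # rotate across disks every CHUNK blocks
    chunk_index, within = divmod(block, CHUNK)
    disk = chunk_index % disks
    offset = (chunk_index // disks) * CHUNK + within
    return disk, offset

def jbod_map(block, disks):
    # fill disk 0 completely before touching disk 1
    disk, offset = divmod(block, DISK_BLOCKS)
    return disk, offset

# Sequential IO: RAID 0 spreads it, JBOD concentrates it on disk 0.
print(raid0_map(0, 5), raid0_map(64, 5), raid0_map(128, 5))
print(jbod_map(0, 5), jbod_map(64, 5), jbod_map(128, 5))
```

So with concatenation, a single hot 150 GB working set sits on one spindle and you get one disk's worth of IOPS, which is the "single disk eating all of the Exchange IOPS" scenario.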
|
Pass all the individual disks through to the host and run this? http://www8.hp.com/uk/en/products/data-storage/server-vsa.html
|
# ? Jan 24, 2016 00:27 |
|
Just tried X2Go, a remote desktop protocol thing. It's incredible. It's way faster and smoother than SPICE, and it beats VNC and regular RDP by a country mile. A pain in the butt to set up and get a connection going, but the results are great. Even on bloated sites like the NY Times, scrolling is smooth and responsive. Unfortunately, it's not a good overall solution because it's not very portable like RDP, where you can remote in from just about any computer with minimal fuss, so it's really only useful for situations where you're remoting in to the same place all the time.

I'm having fun playing around with LXC containers in Proxmox. I created a command-line Ubuntu container and installed LXDE, xrdp, and x2go on it so I can remote into it and browse in Chromium. So far I've used up 1.7GB on the hard drive (it started at around 600MB, blew up by about 200MB just from upgrading from 15.04 to 15.10, and the rest was LXDE and x2go), which seems a little much for what I've got happening, but I guess graphical goodies don't come cheap.
|
# ? Jan 24, 2016 04:21 |
|
Thanks Ants posted:Pass all the individual disks through to the host and run this?

The StoreVirtual VSA is pretty slick, as is their StoreOnce VSA. I use the latter as a backup destination: 1TB of raw storage capacity that does dedupe and integrates with Veeam. You can get NFR licenses for both, as well as for Veeam Availability Suite for that matter.
|
# ? Jan 24, 2016 05:28 |
|
If write performance isn't really a concern and it's not production, I would throw them all together in a RAID 5 with an okayish controller (at least 128MB of cache) and skip the hot spare.
|
# ? Jan 25, 2016 21:44 |
|
Where can I find OS software for Hercules mainframe emulation? I have a VPS that I want to use for the VM. The VPS is running Ubuntu. I was able to install Hercules and get it to run. I have found sites that provide instructions for Windows, as well as for physical installation; however, I am unable to install it on my VPS via the command line. My VPS does not support KVM, as it is one of those budget VPSes; I think I paid $8 for the year. Also, are there any other good JD Edwards or equivalent emulators that are free? I'm trying to get experience with mainframes to help me pad my resume so I can get a better job.
|
# ? Jan 26, 2016 06:05 |
|
You can get a cheap VPS from ehvps which supports nested virt. As far as "where do I find z/OS images/software for it?", you can't ask that here
evol262 fucked around with this message at 14:03 on Jan 26, 2016 |
# ? Jan 26, 2016 14:00 |
|
Welp, there goes the last of the good support engineers at VMware: http://fortune.com/2016/01/26/vmware-layoffs-hit/
|
# ? Jan 26, 2016 17:15 |
|
joebuddah posted:Where can I find O.S. software for Hercules mainframe emulation. I have a vps that I want to use for the vm. The vps is running Ubuntu. I was able to install Hercules and get it to run. I have to found sites that provide instructions for windows, as well as physical installation. However i am unable to install it in my vps via command line. My vps does not support kvm, as it is one of those budget vps. I think I paid $8 for the year.

If you want an old version of MVS, you can go to http://mvs380.sourceforge.net

If you want z/OS for Hercules? There's no way to sugar coat it: IBM doesn't make it easy or affordable for hobbyists to touch z/OS. You can buy an ADCD distribution of z/OS, but you "need" to use IBM's own hardware emulator package, which comes with a hefty price tag. I mean hefty for a hobbyist, not for a professional who needs to work with z/OS.

I'm not sure what KVM has to do with the equation, however. It shouldn't matter at all. If you have z/OS or MVS CCKDs, all you need to do is make sure the Hercules config references them appropriately, make sure your LOADPARM parameters are correct, open a tn3270 emulator and point it at the console defined by Hercules, launch Herc, then at the command line "ipl xxxx" where xxxx is the address of your IPL volume. If you have instructions for running Hercules under Windows, the Linux equivalent is nearly identical. some kinda jackal fucked around with this message at 18:13 on Jan 26, 2016 |
# ? Jan 26, 2016 18:06 |
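For reference, the launch-and-IPL flow described above maps onto a minimal hercules.cnf along these lines. This is a sketch only: every device address, filename, and the LOADPARM value here is a placeholder, to be replaced with whatever your own CCKD distribution documents.

```
# hercules.cnf sketch -- all addresses and filenames are placeholders
MAINSIZE  64             # MB of emulated main storage
ARCHMODE  ESA/390
LOADPARM  0120M1         # placeholder; use your distribution's value
CNSLPORT  3270           # point your tn3270 emulator at this port

# device statements: address, type, backing file
0700  3270               # console terminal defined by Hercules
0120  3390  mvsres.cckd  # IPL volume; at the Herc console: ipl 0120
```

With that in place the sequence is: start Hercules with this config, connect the tn3270 emulator to the console port, then issue "ipl 0120" (or whatever your IPL volume's address is) at the Hercules command line.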
|
Martytoof posted:I'm not sure what kvm has to do with the equation however.
|
# ? Jan 26, 2016 18:38 |
|
This is probably starting to stray from virtualization, but joebuddah, if you are looking for experience in mainframe concepts, take a look at IBM's "Master the Mainframe" contest. It's intended for students, to get them interested in the subject, but you can ask to audit the material and get hands-on experience completing the challenges; you just aren't eligible for t-shirts and prizes and such. Master the Mainframe: http://www-03.ibm.com/systems/z/education/academic/masterthemainframe/

You're really unlikely to be able to put "mainframe" on your resume as you would Linux. Mainframes in any respectable organization are typically run by a team of people who likely work on different aspects of the system. A Linux admin will likely have his hands in everything, but imagine if you split the jobs out into a guy who installs Linux, a guy who maintains SELinux and the passwd files, a guy who works on storage, etc. It's possible that with the diminishing amounts of z/OS talent you'll get people doing more jobs, but the typical setup has been very siloed.

That said, I don't want to dissuade you from learning as much about the mainframe as you can. I was in the same boat a few years ago, and while I'm most certainly not an expert by any stretch of the imagination, I do at least pretend to know how to get around various aspects of MVS and z/OS. Read through this; it's a really good primer on concepts and such: IBM Redbooks: Introduction to The New Mainframe: z/OS Basics: http://www.redbooks.ibm.com/abstracts/sg246366.html

Once you have a good 1,000 ft. overview of the system, you can get more details by going through the ABCs of System Programming [vol 1-10] redbooks. They're more in depth. You don't need to read each one, but each one has various topics you may be interested in. At least know where to find them. Bookmark IBM's Redbooks site and visit often: http://www.redbooks.ibm.com/abstracts/sg246981.html?Open
|
# ? Jan 27, 2016 02:04 |
|
Thanks for all the information it is really helpful
|
# ? Jan 27, 2016 02:56 |
|
DevNull posted:Welp, there goes the last of the good support engineers at VMware: http://fortune.com/2016/01/26/vmware-layoffs-hit/
|
# ? Jan 27, 2016 04:46 |
DevNull posted:Welp, there goes the last of the good support engineers at VMware: http://fortune.com/2016/01/26/vmware-layoffs-hit/ Hopefully you make it through that free and clear.
|
# ? Jan 27, 2016 06:40 |