|
Studebaker Hawk posted:I have had some nightmare experiences P2V'ing SBS 2008. I would do this: http://technet.microsoft.com/en-us/library/gg563798.aspx I've looked at it, but quite honestly I'm not sure their "migration" path looks any safer. Although I'd be interested to hear what problems you had; PM me if you don't want to clutter the thread.
|
# ? Mar 27, 2013 19:24 |
|
|
Does anyone really prefer XenServer over vSphere? I am getting ready to bring up a new file server under XenServer and went to read about expanding the VHD (a simple online task within VMware). It looks like XenServer requires me to bring down the VM to do so. Then I started looking at expanding the datastore, and it has to be done through the command line (at least according to what I read). quote:Issue “xe sr-list” to retrieve all storage repositories. We will need the uuid (long string of letters and numbers, example: 6912c302-c4dc-0676-70e5-8d6ff574a61f) and the device name (such as /dev/sdc in my case) in the steps that follow for the LUN we need to extend. Ugh.
|
# ? Mar 28, 2013 15:37 |
|
Moey posted:Does anyone really prefer XenServer over vSphere? The licensing model is the simplest of all the big 3... but everything else is pretty much crap in my eyes. I managed a 2-host Citrix Xen cluster for about 6 months and hated it.
|
# ? Mar 28, 2013 15:45 |
|
XenServer is for poors.
|
# ? Mar 28, 2013 17:19 |
|
Ugh. We have a network admin who thinks he is a sysadmin. He managed to pull all of our funding for vSphere and spent it all on XenServer. This happened right before I started here. Needless to say, I am already finding XenServer to be crap from an admin point of view.
|
# ? Mar 28, 2013 18:18 |
|
Moey posted:Ugh. He should probably be fired.
|
# ? Mar 28, 2013 18:31 |
|
Any recommendations on a managed/hosted solution? We have a need to get around 50 geographically separated thin clients up and running in a super short period of time. We have two vSphere 5.1 servers that we use locally, but we don't currently have the bandwidth to support that many external clients. They would be running Win 7 if that makes any difference. Maybe this is the wrong thread for this, but I'm taking a shot.
|
# ? Mar 28, 2013 19:59 |
|
synthetik posted:Any recommendations on a managed/hosted solution? We have a need to get around 50 geographically separated thin clients up and running in a super short period of time. We have two vSphere 5.1 servers that we use locally, but we don't currently have the bandwidth to support that many external clients. You could run 50 thin clients using RDP off less bandwidth than a T1 provides. EDIT: Unless you aren't using RDP on these clients or you're doing something other than basic office stuff (Word/Excel/etc...) Syano fucked around with this message at 20:07 on Mar 28, 2013 |
# ? Mar 28, 2013 20:05 |
|
synthetik posted:Any recommendations on a managed/hosted solution? We have a need to get around 50 geographically separated thin clients up and running in a super short period of time. We have two vSphere 5.1 servers that we use locally, but we don't currently have the bandwidth to support that many external clients. Your average PCoIP session should run around 128-256 kbit/s (depending on how you tune it); with thin clients you can use client-side caching and Multi-Media Redirection (MMR), as well as a bunch of other tuning you can do in VMware View. What bandwidth do you have? Dilbert As FUCK fucked around with this message at 20:15 on Mar 28, 2013 |
# ? Mar 28, 2013 20:11 |
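A quick sanity check on the numbers in the two posts above. Both the T1 figure and the per-session PCoIP figure are ballpark assumptions, not measurements:

```shell
# Per-session budget if 50 clients share a single T1 (1544 kbit/s).
t1_kbps=1544
clients=50
echo "$((t1_kbps / clients)) kbit/s per session"   # ~30 kbit/s: plausible for light office RDP

# Aggregate load if each tuned PCoIP session averages ~256 kbit/s.
pcoip_kbps=256
echo "$((clients * pcoip_kbps)) kbit/s total"      # 12800 kbit/s, i.e. ~12.5 Mbit/s
```

So 50 RDP sessions doing Word/Excel can squeak under a T1, while 50 untuned PCoIP sessions would swamp a 10 Mbit pipe.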
|
Syano posted:You could run 50 thin clients using RDP off less than the bandwidth provided by a t1. I'm not using RDP - they would be accessing a (fairly) network-traffic-heavy application that is hosted elsewhere. This is in the medical field, so think large result sets, x-ray images, and other bandwidth (but not necessarily processor) intensive applications. Corvettefisher posted:Your average PCoIP should be around 128~256kb (depending on how you tune it), with thin clients you can use client side caching and Multi-Media Redirection(MMR), as well as a bunch of other tuning things you can do in VMware View. My main lab is running off a 10 meg internet connection - we have an MPLS to our current external sites. We want to provide access to an external web service that would use a fairly high amount of bandwidth. The locations will all have various levels of network access; we will not be providing the local internet for the sites. We are contractually obligated to provide the system that the client will access the service on; we can't just email them a link and a login. The hosted instances will not need access to my internal network. There are a couple of options that we see: A) Provide physical computers for each remote location. This would be simple, but possibly not the quickest or most cost-efficient. B) Provide a virtualized solution, where we drop-ship thin clients out to the locations and have them responsible for providing the power/internet hookup. We haven't settled on a solution; we are looking at total cost (hardware, licensing, FTE, bandwidth) before moving forward. edit: To be clear, solution B can be done two ways: I provide my own VDI/vSphere server in a datacenter somewhere, or use an existing company. I am looking at both, I am just unfamiliar with who is providing what. edit 2: Also, we might be making the whole thing way more complicated than we need to. And this might be the entirely wrong thread for my question.
synthetik fucked around with this message at 20:53 on Mar 28, 2013 |
# ? Mar 28, 2013 20:29 |
|
I wouldn't say this is the wrong thread, but you might want to consider some things. If you don't have any metrics on what a user is doing/needing, YOU NEED TO. A physical desktop would probably be your safest answer, seeing how you have a limited time constraint; rushing into a VDI solution will bring you some heartache unless you have a good idea what the users' storage/cpu/memory/network requirements are. If they are dealing with large X-ray images day to day, that is going to chew up that 10mb line in no time flat (seeing how lossless is probably required); even if you do use client-side caching, the images still need transporting. This can cause some serious headaches for end users, and even sometimes make the solution fail altogether. If you could take a week to gather some metrics from perfmon on them, that would help you with a decision. If these are doctors hitting X-rays all day, I can almost assure you 25-50 doctors pulling X-rays on a 10mb pipe is going to blow up in your face hard. I guess some questions are: How many simultaneous users? What is the average user doing? What is the average number of users per site? What is the latency between sites? If the offices are large enough, there is also the option of deploying a "VDI-in-a-box" solution, so the clients get gigabit LAN speed and latency, utilize their own internet, and still get all the benefits of VDI. You could also link those remote "VDI-in-a-box" deployments back to a central vCenter server for management. However, this might be overthinking it given the number of users. Dilbert As FUCK fucked around with this message at 21:24 on Mar 28, 2013 |
# ? Mar 28, 2013 21:18 |
|
Corvettefisher posted:I wouldn't say this is the wrong thread but you might want to consider some things, if you don't have any metrics on what a user is doing/needing, YOU NEED TO. A physical desktop would probably be your safest answer, seeing how you have a limited time constraint; rushing into a VDI solution will bring you some heartache unless you have a good idea what the users' storage/cpu/memory/network requirements are. Application bandwidth is a non-issue. First, your questions: How many simultaneous users? One per site max, so 50 total. What is the average user doing? Mostly placing orders and viewing/printing discrete results. Large image downloads would happen in off-hours, rarely on-demand. What is the average number of users per site? One per site. What is the latency between sites? Not an issue as far as I can tell; there isn't any intra-site communication as far as our application is concerned. The systems we would be placing in the field will be accessing a web service. It is externally hosted and has (for all intents and purposes) unlimited bandwidth/processing. We just need to quickly obtain, deploy (and support) 50 devices that can connect to the web service. My reservations about throwing desktops at the problem are more on the support side: I don't have the manpower to physically fix problems when they crop up. Terminals seem to be a better choice. Running them through my current network is not going to work, and I understand that. If a download takes too much time, the client would be responsible for upgrading their local bandwidth. I was looking for a hosted solution due to the time constraints; I figured it would be easier for an established provider to spin up 50 Windows instances than for me to build, test, and deploy a virtual environment.
|
# ? Mar 28, 2013 21:37 |
|
Okay, I misunderstood; I thought you were operating on 10mb for 50 users. If that isn't the case, virtual desktops would just need to be supplied with ample ram, cpu, and storage, which you could test out easily if the doctors are doing similar tasks. You might just want to say 2mb up/down or greater is required for the best picture.
|
# ? Mar 28, 2013 22:35 |
|
three posted:He should probably be fired.
|
# ? Mar 29, 2013 00:09 |
|
I'd love to have seen his business justification for that transition.
|
# ? Mar 29, 2013 00:11 |
|
Martytoof posted:I'd love to have seen his business justification for that transition. It's cheaper and does the same thing!
|
# ? Mar 29, 2013 00:45 |
|
Moey posted:Does anyone really prefer XenServer over vSphere? I use XenServer pretty extensively; I've used Citrix products for quite a while. Their last release, XenServer 6.1, is a pile of poo poo. I would normally recommend them if you could use the money saved versus VMware for other things (like a SAN, etc.). XenServer does require you to take a VM offline to expand its virtual disk. It can be done through the GUI: under Storage, click the disk, then Properties. What you've posted is how to expand the datastore; as far as I know, you still have to do that via the command line.
|
# ? Mar 29, 2013 00:46 |
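A rough sketch of that offline disk grow from the CLI. The VM name and UUID below are placeholders, and `xe vdi-resize` should be double-checked against your XenServer version's docs before trusting this:

```shell
# Find the VDI attached to the VM (name is a made-up placeholder).
xe vbd-list vm-name-label="fileserver01" params=vdi-uuid
# Shut the VM down first -- XenServer of this era can't grow a disk online.
xe vm-shutdown vm="fileserver01"
# Grow the virtual disk to 500 GiB (substitute the UUID from the step above).
xe vdi-resize uuid=<vdi-uuid> disk-size=500GiB
xe vm-start vm="fileserver01"
# The guest still has to grow its own partition/filesystem afterwards.
```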
|
The worst part is we do not have hard budget limits. Government work.... I see more money wasted than I can imagine. This job is already driving me nuts. gently caress this place
|
# ? Mar 29, 2013 02:55 |
|
Goon Matchmaker posted:It's cheaper and does the same thing! Just run a linux box with Virtualbox and like 20 VMs running on the desktop. It's FREE and what's the difference!! You can even play tuxracer in your downtime.
|
# ? Mar 29, 2013 03:01 |
|
Moey posted:The worst part is we do not have hard budget limits. Government work.... Hey at least you're in Colorado now Actually now that I think of it I am going to be in the mountains this weekend with some buddies, shoot me a PM or something if you'd be interested in getting a beer with a random dude from the internet.
|
# ? Mar 29, 2013 03:10 |
|
Found this really neat article this morning on Hackaday about VM-to-VM network tweaking on virtualized Solaris; figured it might be a good read in here: http://blog.cyberexplorer.me/2013/03/improving-vm-to-vm-network-throughput.html
|
# ? Mar 29, 2013 13:35 |
|
Martytoof posted:Just run a linux box with Virtualbox and like 20 VMs running on the desktop. It's FREE and what's the difference!! hi im sargent honks do not forget to run seamless mode
|
# ? Mar 29, 2013 21:48 |
|
So has anyone here deployed a Quadro accelerated VDI environment? Debating buying a 4000 card to test out poo poo on
|
# ? Mar 30, 2013 03:02 |
|
Make work buy it?
|
# ? Mar 30, 2013 03:38 |
|
evil_bunnY posted:Make work buy it?
|
# ? Mar 30, 2013 03:45 |
|
Misogynist posted:He just gave his notice so that's unlikely
|
# ? Mar 30, 2013 04:02 |
|
evil_bunnY posted:The new one... Most likely doing this; however, I thought I would ask. Also starting up a VMUG here on the east coast. loving drat it, I really want to respond to the "poo poo that pisses you off" thread and clear up some poo poo loving TOXX Dilbert As FUCK fucked around with this message at 06:05 on Mar 30, 2013 |
# ? Mar 30, 2013 05:19 |
|
Corvettefisher posted:So has anyone here deployed a Quadro accelerated VDI environment? Debating buying a 4000 card to test out poo poo on The only downside to the 4000 is the very limited amount of RAM on board. If you give each VM 256MB of VRAM (video, not virtual), you can only get like 7 VMs on the card. If you are just looking to see how well a certain app behaves in a VM, that will be fine.
|
# ? Mar 30, 2013 06:05 |
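That ceiling falls straight out of the arithmetic, assuming the Quadro 4000's 2 GB of on-board memory; the card keeps some for itself, which is why the practical count lands below the raw division:

```shell
card_mb=2048     # Quadro 4000 on-board memory (2 GB)
per_vm_mb=256    # VRAM handed to each VM
echo "$((card_mb / per_vm_mb)) VMs theoretical"   # 8 by division; ~7 in practice after overhead
```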
|
DevNull posted:The only downside to the 4000 is the very limited amount of RAM on board. If you give each VM 256M of VRAM (Video, not virtual), you can only get like 7 VMs on the card. If you are just looking to see how well a certain app behaves in a VM, that will be fine. Aware of that, buddy, but thanks! Mostly looking into performance of CAD programs, mostly AutoCAD.
|
# ? Mar 30, 2013 06:08 |
|
What do you guys think of OpenStack? I know a few enterprise companies are using it, but what do you think of the possibility of it replacing VMware for smaller businesses these days?
ghostinmyshell fucked around with this message at 14:59 on Apr 1, 2013 |
# ? Mar 31, 2013 21:12 |
|
Mierdaan posted:VMware Loses More Than $2 Billion in Market Cap on PayPal / Ebay Rumors. That seems so odd. What is the likelihood of VMware being engineered into obsolescence?
|
# ? Apr 1, 2013 11:59 |
|
Tab8715 posted:That seems so odd, what is the likelihood of VMware being engineered into becoming obsolete? vSphere itself will be doing just fine as long as they can maintain enough of a technical lead that their product's significant licensing cost still delivers a positive return on investment versus all the extra hardware you would need with a Hyper-V/KVM/Xen hypervisor.
|
# ? Apr 1, 2013 14:31 |
|
vSphere is a significantly more well-rounded and feature-rich product than its competition, in my opinion. But do you know how many people actually USE all of those features? Not many. There are still people in 2013 who don't have DRS enabled. The war to overtake the hypervisor isn't about being better than ESXi; it's about being "good enough." It won't be long.
|
# ? Apr 1, 2013 14:39 |
|
People also love to build Hyper-V hosts with all kinds of useless stuff (not the Core edition; Adobe Flash/Reader installed). I wish Microsoft made it harder to do that, perhaps by forcing Server Core to be used. It makes maintenance a pain in the rear end, as there is guaranteed to be an update that requires a reboot each month. (These are usually small environments, no shared storage) sanchez fucked around with this message at 15:10 on Apr 1, 2013 |
# ? Apr 1, 2013 15:07 |
|
sanchez posted:People also love to build Hyper-V hosts with all kinds of useless stuff (not the core edition, adobe flash/reader). I wish microsoft made it harder to do that, perhaps forcing server core to be used. It makes maintenance a pain in the rear end as there is guaranteed to be an update that requires a reboot each month. (These are small environments usually, no shared storage) It makes me sad that there are so many people in the profession who are so bad at their jobs that we need the vendors to limit functionality so people don't do dumb stuff like that.
|
# ? Apr 1, 2013 15:14 |
|
Libertarianism for networks: Admins should have a right to destroy their networks, therefore enabling management to keep the admin on staff plus hire an MSP that actually knows what they are doing, therefore propping up capitalism. Did I do that right? I dunno it sounded funny in my head
|
# ? Apr 1, 2013 15:20 |
|
So I've got 4 HP DL360 G8s with the HP ESXi 5.1 ISO installed, and their 4-port onboard NICs seem to randomly drop from 1000/full to 10/full; this is with auto-negotiation turned on for all of them. I thought it might be related to the network runs being too close to the power, so I moved them. Anyone else see behaviour like this with ESXi 5.1? This is happening randomly across the ports, and each goes to a different switch as well, so it's not that.
|
# ? Apr 1, 2013 18:56 |
|
In preparation for our pending VMWare cluster shipment, I'm looking at Dell's reference 3-2-1 configuration for VMWare, and there are a few things that don't quite seem right to me. http://i.dell.com/sites/content/business/smb/sb360/en/Documents/r720-321-configuration.pdf Now, some of the things that DO seem right: this is our first iSCSI install, and we are intending to keep the iSCSI traffic segregated on the PowerConnects that we are getting with this implementation. So all that seems correct to me. I can also see where keeping the vMotion traffic confined to that switch configuration could make sense depending on the complexity and load of the infrastructure network, so I'm likely going to do that. However, that's where things begin to fall down. They may have "Management" access and vMotion on two different IP subnets, but it appears that they have them sharing the same VLAN on the PowerConnect switches. From what I understand, this is a major no-no; vMotion traffic should be completely isolated between the hosts. They have all management traffic confined to the PowerConnect switches, and they also don't have the PowerConnect switches connected to infrastructure switching. This seems like an odd situation, because then you can't have any sort of monitoring on those components. So it seems like I would want to move the management network onto the infrastructure switching so that the hosts are accessible by the rest of the network. At the same time, I would also connect the management interface of the PowerConnect switches to the infrastructure switches so they can be accessed from the rest of the network. That way, we can monitor these components with our infrastructure monitoring. I mean, I can see where it sort of works as a "VMWare in a box" configuration in a simplistic sense, but they could use some clarification on a few things. So, here's how I'm intending to set things up; let me know if there are any problems.

3 R620s (all hosts), 2 PowerConnect switches, 1 PS6100x.

On stacked PowerConnects: two VLANs, one for VMWare iSCSI traffic, one for vMotion traffic. Each switch is also connected to infrastructure switching for management on a different subnet.

Each R620:
2 ports for management (connected to infrastructure switching on its own VLAN)
2 ports for vMotion (connected to PowerConnect switches on its own VLAN)
2 ports for iSCSI (connected to PowerConnect switches on its own VLAN)
2 ports for VM traffic (utilizing VST, connected to infrastructure switching on a physical trunk port)

PS6100x:
2x management ports connected to infrastructure switching
8x ports connected to PowerConnects on the iSCSI VLAN

vCenter server (separate physical server that we already have):
2 ports for server access (connected to infrastructure switching)
2 ports for VMWare management (connected to infrastructure switching on the VMWare management VLAN)
|
# ? Apr 1, 2013 21:11 |
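For the per-host side of a layout like that, the vMotion portgroup/VLAN split sketches out roughly like this in ESXi 5.x esxcli. Every name, VLAN ID, and address here is a made-up placeholder, and the iSCSI pair would get the same treatment on its own VLAN:

```shell
# Build a vSwitch for vMotion and attach its two dedicated uplinks.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Put vMotion on its own portgroup/VLAN (VLAN 20 is a placeholder).
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20

# Give the vmkernel interface a static address on the vMotion subnet.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.20.11 --netmask=255.255.255.0 --type=static
```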
|
I'm wondering if I'm better off installing Windows XP on my Lenovo T400 and then just installing Linux in VirtualBox and running it full-screen. My thinking is I'd be able to take full advantage of power saving, switchable graphics, etc.
|
# ? Apr 2, 2013 00:47 |
|
|
whaam posted:So I've got 4 HP DL360 G8s with the HP esxi 5.1 iso installed and their 4 port onboard nics seem to randomly drop from 1000 full to 10 full, this is with auto turned on them all. I thought it might be related to the network runs being too close to the power so I moved them, anyone else see behaviour like with in esx 5.1? This is happening randomly across the ports, each are going to different switches as well so its not that. https://my.vmware.com/web/vmware/details?downloadGroup=HP-ESXI-5.1.0-GA-10SEP2012&productId=285 What are your network adapters? If Broadcom, they might need a firmware update; I have had that issue with Broadcom drivers on Dell/UCS. bull3964 posted:-stuff- What was your licensing level again? So far what you posted looks okay; I'll read it again in the morning, long day today. Yeah, nothing looks terribly wrong; I would suggest giving iSCSI a bit more. E: Also started a website about VMware and stuff. Any topics people want me to cover? Dilbert As FUCK fucked around with this message at 17:36 on Apr 2, 2013 |
# ? Apr 2, 2013 01:25 |
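For chasing the speed-flap issue whaam described, a couple of stock ESXi 5.x commands show what each NIC negotiated and which driver it is running (flag spellings per the 5.x esxcli namespace; verify on your build):

```shell
# Show link state, speed, and duplex for every vmnic.
esxcli network nic list
# Driver details for one NIC (vmnic0 is a placeholder).
esxcli network nic get -n vmnic0
# Temporarily force 1000/Full on a flapping port while troubleshooting.
esxcli network nic set -n vmnic0 -S 1000 -D full
```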