|
Nah, I don't think what you're trying to do is ridiculous. I had to do the very same thing back when I rocked a VM for work (proprietary call software that wouldn't run in anything but IE6, gently caress that noise, and I wanted to run Linux natively). My problem was the thing still being a bloated 95-100GB VMDK when it could easily have been 20GB, so the exact same scenario as you. My process was: defrag inside the guest (probably not needed) -> re-import the VM via VMware Converter, temporarily taking up even more space but thinning the destination VMDK -> defrag the main physical hard disk. That last step was me being spergy; I don't think you want to make user laptops/workstations sit through yet another defrag. Note: when I moved to SSDs and ran through the process again, the defragging was completely unnecessary, aside from the actual shrinking of the Windows VMDK/partition/filesystem I wanted to do, obviously.
|
# ? Nov 28, 2012 01:03 |
|
Just use VMware Converter, it will solve all that ails ya.
|
# ? Nov 28, 2012 01:15 |
|
I've seen people defrag VMs that lived on copy-on-write SANs. For them too, there is a special place in hell.
|
# ? Nov 28, 2012 02:35 |
|
Ashex posted:Yep, just reclaiming zeroed space is all I'm trying to do. I'm very aware this sounds backwards and kinda silly, but it keeps people from complaining to me about how small their laptop drive is.

What I've done when I was in a pinch is just create a new disk, use Acronis True Image to copy the system disk over to it, then delete the old disk.
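For anyone who wants to script the zero-and-shrink cycle instead of doing a full disk copy, here's a rough sketch. It assumes Sysinternals sdelete inside the guest and vmware-vdiskmanager on the host (the VMDK path is made up for illustration), and it degrades gracefully where the tool isn't installed:

```shell
# Step 1, inside the Windows guest (shown as a comment, not runnable here):
# zero out free space so the shrink pass can reclaim it.
#   sdelete -z C:

# Step 2, on the host with the VM powered off: shrink the thin VMDK.
VMDK="/vms/work/work.vmdk"   # illustrative path, substitute your own

if command -v vmware-vdiskmanager >/dev/null 2>&1; then
    # -k shrinks/defragments a growable virtual disk in place
    vmware-vdiskmanager -k "$VMDK"
else
    echo "vmware-vdiskmanager not found; skipping shrink of $VMDK"
fi
```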
|
# ? Nov 28, 2012 03:48 |
|
I need advice on the best way to set up a Photoshop environment in VirtualBox. I have Linux Mint 14 as the host and Windows 8 Pro as the guest running in VirtualBox, with Photoshop CS6 installed on the guest. My system has 16GB of RAM, a 120GB SSD and a 750GB HDD. I'm running dual monitors, one for Linux and one for Windows/PS, with a lot of moving back and forth. My video card is a GeForce GTX 550 Ti. What I need are suggestions on the best configuration to optimize Photoshop performance in this setup. Like, how much RAM should I allot to VirtualBox? Is there a way to set up a virtual scratch disk? Any other VirtualBox/Windows/Photoshop settings I need to worry about to get Photoshop running as quickly and smoothly as possible? Additionally, is VirtualBox even the right tool for this? Should I consider other virtualization products?
|
# ? Nov 28, 2012 22:39 |
|
evil_bunnY posted:I've seen people defrag VMs that lived on copy-on-write SANs. For them too, there is a special place in hell.
|
# ? Nov 29, 2012 01:12 |
|
http://www.vmware.com/files/pdf/view/Server-Storage-Sizing-Guide-Windows-7-TN.pdf
http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf

That and a whole lot of other things.

caiman posted:I need advice on the best way to set up a Photoshop environment in Virtualbox. I have Linux Mint 14 as the host, and Windows 8 Pro as the guest running in Virtualbox. Photoshop CS6 is installed on the guest. My system has 16gb of RAM, a 120gb SSD and a 750gb HDD. I'm running dual monitors, one for Linux and one for Windows/PS, with a lot of moving back and forth. My video card is a Geforce GTX 550 ti.

You can try it in VirtualBox, but you should really look at dual booting, depending on how much Photoshop work you are doing. VirtualBox/VMware Workstation can accelerate 3D somewhat, but performance-wise you would be better off just installing Windows on its own partition.
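If you do stay with VirtualBox, most of the knobs caiman asked about (RAM, video memory, dual monitors, a scratch disk) are scriptable through VBoxManage. A hedged sketch only; the VM name, sizes, and scratch-disk path are invented for illustration, not recommendations from the post:

```shell
VM="Win8-PS"   # illustrative VM name

if command -v VBoxManage >/dev/null 2>&1; then
    # Give the guest roughly half of a 16GB host's RAM, max out video
    # memory, and enable both monitors for the dual-screen setup.
    VBoxManage modifyvm "$VM" --memory 8192 --vram 128 --monitorcount 2
    # 2D/3D acceleration helps canvas drawing somewhat (Windows guests).
    VBoxManage modifyvm "$VM" --accelerate3d on --accelerate2dvideo on
    # Carve a dedicated 20GB scratch disk (store the .vdi on the SSD),
    # attach it, then point Photoshop's scratch setting at it in-guest.
    VBoxManage createhd --filename ~/vms/ps-scratch.vdi --size 20480
    VBoxManage storageattach "$VM" --storagectl "SATA" --port 1 --device 0 \
        --type hdd --medium ~/vms/ps-scratch.vdi
else
    echo "VBoxManage not found; commands shown for illustration only"
fi
```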
|
# ? Nov 29, 2012 01:52 |
|
I would run w8 on the hardware and Linux virtualized, if only to get your poo poo color-corrected properly.
|
# ? Nov 29, 2012 03:13 |
|
evil_bunnY posted:I would run w8 on the hardware and Linux virtualized, if only to get your poo poo color-corrected properly.

What does this mean?
|
# ? Nov 29, 2012 03:27 |
|
I doubt you can calibrate your virtualized display is what I mean.
|
# ? Nov 29, 2012 03:46 |
|
So apparently I'm getting a free upgrade to vCloud Suite Standard. This poo poo looks complicated and I have no clue how I could use it in my environment. Still... free. EDIT: Urgh, reading the documentation is like being assaulted by some buzzword-spitting monster. Rhymenoserous fucked around with this message at 21:35 on Nov 29, 2012 |
# ? Nov 29, 2012 21:33 |
|
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=30854 Free course if you have 3.5hrs to kill
|
# ? Nov 29, 2012 21:35 |
|
Rhymenoserous posted:So apparently I'm getting a free upgrade to vCloud Suite Standard.

It's not free if you consider that the cost of your support contract is going to go up by something like $300/socket per year.
|
# ? Nov 29, 2012 21:38 |
|
Rhymenoserous posted:So apparently I'm getting a free upgrade to vCloud Suite Standard.
|
# ? Nov 29, 2012 21:41 |
|
So I'm currently tasked with rolling out a vSphere implementation on a relatively small scale. We're hoping to build something of an IaaS system where users can create and destroy their own machines as they see fit. It will be used for heavy testing, but no production work. As I understand it, vCloud Director does not have all of the features Lab Manager did, including the LiveLink capability. Is this incorrect, and has it been replaced by something else? The ability to quickly share machines between dev and test would be a major timesaver, and something we really require if we're going to make this transition. Further, in terms of hardware for something like 64 concurrent machines, am I aiming too low with Dell tower server hardware in the $12k-15k range? I was thinking one dedicated storage machine plus one compute machine for the processing. That's just a rough estimate, and I can give more concrete numbers if that would help. I'll be picking up some more reading material on vSphere this weekend and getting my knowledge up to date, but I was hoping for some preliminary responses before jumping into anything.
|
# ? Dec 1, 2012 00:44 |
|
ahh gently caress not right now answer in the morning
Dilbert As FUCK fucked around with this message at 02:16 on Dec 1, 2012 |
# ? Dec 1, 2012 01:50 |
|
Corvettefisher posted:http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=30854

Thank you for this link. I was unaware they had such an education section; it looks quite useful. This is my last week off before I start, so it seems like as good a way as any to delve into all the software.
|
# ? Dec 1, 2012 21:45 |
|
I've never really played with virtualization until tonight. I realized the Win 8 Pro install on my desktop includes Hyper-V... I now have an Ubuntu Server VM running, and it just works. So slick.
|
# ? Dec 2, 2012 03:33 |
|
CanOfMDAmp posted:Further, in terms of hardware specifications for something like 64 concurrent machines, am I aiming too low by buying Dell tower server hardware in the $12k-15k range? I was thinking just one dedicated storage machine, with a computing machine for the processing. Just a rough estimate, and I can give more concrete numbers if you guys would have more insight then.

64 machines of what size? Running on only one server? Do you want to leverage HA/DRS for this testing cluster? Letting users create/destroy their own VMs at will can become a dangerous thing. It's very doable, but you need to make sure you create a good process or else they will thrash the system.
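Before pricing hardware, it's worth running the back-of-the-envelope numbers for 64 concurrent VMs. A sketch; the per-VM RAM and vCPU consolidation figures below are assumptions to adjust for your actual workloads, not guidance from the thread:

```shell
VMS=64
RAM_PER_VM=2      # GB per guest, assuming small test VMs
VCPU_PER_VM=1
CONSOLIDATION=6   # assumed vCPUs per physical core for light test loads

RAM_NEEDED=$((VMS * RAM_PER_VM))
# Ceiling division: total vCPUs spread over the consolidation ratio
CORES_NEEDED=$(( (VMS * VCPU_PER_VM + CONSOLIDATION - 1) / CONSOLIDATION ))
echo "RAM: ${RAM_NEEDED}GB, physical cores: ${CORES_NEEDED}"
```

With those assumptions you land around 128GB of RAM and roughly a dozen cores, which is why "one tower server" is the number worth pressure-testing before buying.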
|
# ? Dec 2, 2012 05:35 |
|
CanOfMDAmp posted:So I'm currently tasked with rolling out a vSphere implementation on a relatively small scale. We're hoping to build something of an IaaS system with the ability for users to create and destroy their own machines as they see fit. It will be used for major testing, but no production work will be necessary. As I understand it, vCloud Director does not have all of the features Lab Manager did, including the LiveLink capability. Is this incorrect, and has it been replaced by something else? Having the ability to quickly share machines between dev/test would be a major timesaver, and something we really require if we're going to make this transition.
|
# ? Dec 2, 2012 19:46 |
|
How hard is it to re-mount a LUN/VM and move it from one cluster to another? Basically, I'd love to have a 3-machine cluster for this infrastructure app we have, but the Oracle license per core is just loving killing me. If I separate the cluster and remove the ability to run it on other machines, I would only have to pay for 1 box and would be covered for up to 10 days a year of disaster. 16 cores at ~$25k a core is $400k; put that on 3 boxes and now you're looking at effectively paying an extra $800k for vMotion/HA. gently caress Larry Ellison.
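The licensing arithmetic in the post, written out (the ~$25k/core figure is the poster's, not a quoted Oracle list price):

```shell
PER_CORE=25000      # poster's rough per-core license cost
CORES_PER_HOST=16
HOSTS=3             # cluster size wanted for vMotion/HA

ONE_HOST=$((CORES_PER_HOST * PER_CORE))
CLUSTER=$((HOSTS * CORES_PER_HOST * PER_CORE))
PREMIUM=$((CLUSTER - ONE_HOST))
echo "one host: \$$ONE_HOST, full cluster: \$$CLUSTER, HA premium: \$$PREMIUM"
```

One box is $400k; licensing all three is $1.2M, so the vMotion/HA capability alone effectively costs the $800k difference.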
|
# ? Dec 3, 2012 04:12 |
|
KennyG posted:How hard is it to re-mount a lun/vm and move it from one cluster to another.
|
# ? Dec 3, 2012 04:56 |
|
KennyG posted:If I separate the cluster and remove the ability to run it on other machines
|
# ? Dec 3, 2012 05:24 |
|
Medpak posted:64 machines of what size? Running on only one server? Do you want to leverage HA/DRS for this testing cluster? Letting users create/destroy their own VMs at will can become a dangerous thing. It's very doable, but you need to make sure you create a good process or else they will thrash the system.

Machine size is relatively small, just running various OSes and a few small apps to ensure code changes work. DR is going to be a backup of the base images + VMware configuration, and will be handled manually. I was hoping to avoid questions about letting end users work on their own, as I really want this to be something of a "set and forget" deal that only takes minimal maintenance and updating for new images and such. In some initial playing around I did with Lab Manager, it seemed perfect for letting users unfamiliar with virtualization clone their own copies of various pre-made VMs, with the garbage collector dealing with the mess after the configured timeouts. And trust me, while I realize VMware products usually take a bit more education than some web forums and a couple of books, I've been given this opportunity as a learning experience to get a firm foothold on VM tech. Our current system is a bunch of desktops running different configurations with people RDPing into them, so anything that can turn a pile of desktops virtual is a massive money saver.

EDIT: About users, we're currently operating with several teams of 5-6 people each (the total comes to around 50-60) on a scheduling system where people check out time on the physical boxes we have. I'm not worried about teaching a new process, so I have pretty free rein to make them work as I need them to.

CanOfMDAmp fucked around with this message at 08:59 on Dec 3, 2012 |
# ? Dec 3, 2012 08:56 |
|
Well, I was going to take a week or two break from posting, but aww yeah, just got confirmation that I'm going to VMware's Partner Exchange 2013!
|
# ? Dec 3, 2012 16:45 |
|
KennyG posted:16 cores at ~$25k a core is $400k - if you put that on 3 boxes, now you're looking at effectively paying $800k for vMotion/HA. gently caress larry Ellison.
|
# ? Dec 3, 2012 17:03 |
|
Is it just me, or is installing vCenter 5.1 a million times more complicated than previous versions? I had to restart the installation like 5 times, each time after running up against some fiddly SQL setting that was set wrong. The best part is that most of the errors had nothing at all to do with the actual problem.
|
# ? Dec 3, 2012 20:48 |
|
I did a fresh install on a new server last week that was smooth. I think the quick setup (or whatever it's called) failed, but installing each component (SSO, then vCenter, etc.) worked just fine. I was using the bundled SQL Express DB.
|
# ? Dec 3, 2012 20:59 |
|
No, it has been made more complex in 5.1. What version of SQL are you using? The scripts do make it a bit easier, though. Fake E: Sometimes with the Easy Install you'll have to wait for the services to start, as they are slow to come up the first time; this is especially the case if vCenter has <2GB RAM.
|
# ? Dec 3, 2012 21:00 |
|
The method to install SSO with an external database is ridiculously stupid. VMware should be ashamed they shipped it that way.
|
# ? Dec 3, 2012 21:28 |
|
Separate MSSQL 2008 R2. I did a simple install, but all my problems were SQL-related. I ran the SSO tablespaces script, but decided I didn't want the SSO database called "RSA" because that's very non-descriptive; turns out you can't change the database name. Then I forgot to make a 64-bit DSN for vCenter, and after doing so, it wouldn't recognize it until I quit and reran the installer. THEN it threw an error that the SQL user I gave didn't have the right permissions on the vCenter database, even though it did. Turns out it also needed to be dbo on the msdb database to create agent jobs, but the error didn't say that. The main problem is insufficient error descriptions.

three posted:The method to install SSO with an external database is ridiculously stupid. VMware should be ashamed they shipped it that way.

edit2: Also, bravo on the lovely new web client requiring a separate VM with 4 cores to serve it.
edit3: And bravo on pushing this god drat web client and not having an Update Manager plugin for it.

Erwin fucked around with this message at 23:13 on Dec 4, 2012 |
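For anyone hitting the same undocumented msdb requirement, the fix described above can be scripted; a sketch assuming sqlcmd is available and using an illustrative login name (your server, auth flags, and login will differ):

```shell
VC_LOGIN="vcenter_svc"   # illustrative SQL login used by the vCenter service

if command -v sqlcmd >/dev/null 2>&1; then
    # Map the login into msdb and grant db_owner so the vCenter installer
    # can create its SQL Agent jobs (the installer's error never says this).
    sqlcmd -S localhost -Q "USE msdb;
        CREATE USER [$VC_LOGIN] FOR LOGIN [$VC_LOGIN];
        EXEC sp_addrolemember 'db_owner', '$VC_LOGIN';"
else
    echo "sqlcmd not found; statements shown for illustration only"
fi
```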
# ? Dec 3, 2012 21:30 |
|
Does anyone have any experience using USB NICs with VMware? I need to add another DMZ connection to a host that is maxed out on NICs and expansion cards. I would never consider it for vMotion, iSCSI storage, or the main LAN, but since our internet is only 10Mbps fiber, I don't see it becoming a big issue.

Edit: Never mind, looks like that won't even be supported. Now I have to decide between dropping a LAN connection or a vMotion connection.

itskage fucked around with this message at 17:31 on Dec 5, 2012 |
# ? Dec 5, 2012 17:27 |
|
itskage posted:Does anyone have any experience using USB NICs with VMware? I need to add another DMZ connection to a host that is maxed out on NICs and expansion cards. I would never consider it for vMotion, iSCSI storage, or the main LAN, but since our internet is only 10Mbps fiber, I don't see it becoming a big issue.

Have you thought about segregating that traffic with VLANs?
|
# ? Dec 5, 2012 17:52 |
|
We're setting up storage replication between our primary and secondary NetApp units for a DR plan. Everything is in NFS volumes, so the plan is to replicate the changes nightly when activity is low. If the building burns, we mount the volumes on the backup hosts, import the VMs, and get back online in a couple of hours. The question I have is: should I be concerned with trying to quiesce traffic before replication kicks off? The NetApp units generate a volume delta while the replication is happening, so you're moving stable data. My assumption is that the VMDKs, as I bring them up (in the hopefully non-existent occasion that I actually have to do this), will just think they had a hard crash at the time of replication, and everything we run, including databases, seems pretty resilient to hard crashes these days. Sure, in the case of databases there is going to be a little data loss because the log marker hasn't incremented after a little bit of data was written out, but that's maybe a few seconds' worth, and we're going to be on 12-hour replication schedules, which means losing an average of 6 hours of stateful data anyhow. Is my gut right on this, or do I have my head up my own rear end and really need to quiesce the traffic with the NetApp VMware plugins?
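For reference, a twice-daily cycle like the one described could be expressed as a 7-mode snapmirror.conf schedule on the destination filer; the filer and volume names here are invented for illustration:

```
# /etc/snapmirror.conf on the destination filer (7-mode syntax)
# source:volume   destination:volume   args   minute hour day-of-month day-of-week
filer1:vmnfs01   filer2:vmnfs01_dr   -   0 10,22 * *
```

That line kicks off transfers at 10:00 and 22:00, giving the 12-hour replication interval mentioned in the post.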
|
# ? Dec 5, 2012 22:38 |
|
Moey posted:Have you thought about segregating that traffic with VLANs?

Yeah, good lord. Tag your traffic down to the host and just set up different virtual networks for each traffic tag.
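On ESXi, the tagging side of that advice is one command per port group. A sketch using the standard-vSwitch esxcli namespace (port-group name and VLAN ID are invented, and the upstream physical switch port must trunk that VLAN):

```shell
PORTGROUP="DMZ2"   # illustrative port-group name for the extra DMZ segment
VLAN_ID=30         # illustrative VLAN ID

if command -v esxcli >/dev/null 2>&1; then
    # Create the port group on the existing vSwitch, then tag it (VST mode);
    # guest traffic on this port group gets 802.1Q-tagged by the host.
    esxcli network vswitch standard portgroup add -p "$PORTGROUP" -v vSwitch0
    esxcli network vswitch standard portgroup set -p "$PORTGROUP" --vlan-id $VLAN_ID
else
    echo "esxcli not found; commands shown for illustration only"
fi
```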
|
# ? Dec 5, 2012 22:41 |
|
If it's all VMs, why not SRM?
|
# ? Dec 5, 2012 22:44 |
|
itskage posted:Does anyone have any experience using USB NICs with VMware? I need to add another DMZ connection to a host that is maxed out on NICs and expansion cards. I would never consider it for vMotion, iSCSI storage, or the main LAN, but since our internet is only 10Mbps fiber, I don't see it becoming a big issue.

How are you currently utilizing your NICs? What is the average NIC count per host?

BangersInMyKnickers posted:We're setting up storage replication between our primary and secondary NetApp units for a DR plan. Everything is in NFS volumes, so the plan is to replicate the changes nightly when activity is low. If the building burns, we mount the volumes on the backup hosts, import the VMs, and get back online in a couple hours.

Like evil_bunnY said, SRM might be something to look into. If you aren't familiar with it, here are some free courses to help:
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=31255
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=39993

Dilbert As FUCK fucked around with this message at 23:02 on Dec 5, 2012 |
# ? Dec 5, 2012 22:58 |
|
evil_bunnY posted:If it's all VMs, why not SRM?

Unless something changed recently, it is way outside our budget.

e: There are also a few legacy non-VM iSCSI LUNs and CIFS volumes hanging around that need to be replicated. I'm not sure I will ever be able to fully get rid of them, so if I can do all my replication at the storage appliance level, that seems easier. We've already paid for the licensing there.

BangersInMyKnickers fucked around with this message at 00:31 on Dec 6, 2012 |
# ? Dec 6, 2012 00:29 |
|
BangersInMyKnickers posted:We're setting up storage replication between our primary and secondary NetApp units for a DR plan. Everything is in NFS volumes, so the plan is to replicate the changes nightly when activity is low. If the building burns, we mount the volumes on the backup hosts, import the VMs, and get back online in a couple hours.

Don't quiesce the data. You're just going to get awful performance during VMware snapshot creation/deletion, and it doesn't buy you anything at all. Let NetApp take the snapshots at will (via VSC or a SnapVault schedule) and then replicate them like that. You get a crash-consistent backup that is going to work. Can you remember the last time a VM failed to come up after you did a hard power/reset on it? The answer is "never."
|
# ? Dec 6, 2012 01:53 |
|
madsushi posted:Let NetApp take the snapshots at will (via VSC or a snapvault schedule) and then replicate it like that. You get a 'crash-consistent' backup that is going to work. Can you remember the last time that a VM failed to come up after you did a hard power/reset on it? The answer is 'never'.
|
# ? Dec 6, 2012 04:50 |