Kachunkachunk
Jun 6, 2011
Nah, I don't think what you're trying to do is retarded. I had to do the very same thing when I used to rock a VM for work purposes (proprietary call software that didn't run in anything but IE6; gently caress that noise) because I wanted to run Linux natively.
I had issues with the thing still being a bloated 95-100GB VMDK when it could easily have been 20GB. Hence the exact same scenario as you.

My process was: defrag inside the guest (probably not needed) -> re-import the VM via VMware Converter, which temporarily takes up even more space but re-thins the destination VMDK -> defrag the main physical hard disk. That last one was me being spergy; I don't think you want to wait for user laptops/workstations to complete yet another defrag pass.

Note: when I moved to SSDs and basically ran through the process again, the defragging was completely unnecessary; all that was left was the actual shrinking of the Windows VMDK/partition/filesystem.
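If you ever want to script the zero-then-shrink part, a rough sketch is below. It assumes Sysinternals SDelete inside the guest and VMware Workstation's vmware-vdiskmanager on the host (vmkfstools -K is the rough ESXi equivalent); the tool and disk paths are made up, and the two steps obviously run on different machines, they're just shown together for ordering.

```python
import subprocess

# Step 1 -- run inside the Windows guest: zero out free space so the
# hypervisor/Converter can actually reclaim it (SDelete's -z switch writes
# zeros over unallocated blocks). The path to sdelete.exe is hypothetical.
subprocess.run([r"C:\tools\sdelete.exe", "-z", "C:"], check=True)

# Step 2 -- run on the host with the VM powered off: compact the thin disk.
# vmware-vdiskmanager -k shrinks a Workstation/Player VMDK in place; on an
# ESXi host you'd use "vmkfstools -K /vmfs/volumes/.../disk.vmdk" instead.
subprocess.run(
    ["vmware-vdiskmanager", "-k", "/vms/work-vm/work-vm.vmdk"],
    check=True,
)
```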


adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
just use vmware converter, it will solve all that ails ya.

evil_bunnY
Apr 2, 2003

I've seen people defrag VMs that lived on copy-on-write SANs. For them too, there is a special place in hell.

Serfer
Mar 10, 2003

The piss tape is real



Ashex posted:

Yep, just reclaiming zeroed space is all I'm trying to do. I'm very aware this sounds backwards and kinda silly but it keeps people from complaining to me about how small their laptop drive is.

I'll try defragging then re-thinning. I've already shipped the virtual appliance, so this is more practice for the next release than anything else.

What I've done when I was in a pinch is just create a new disk, use True Image to copy the system disk to the new one, then delete the old disk.

Spatulater bro!
Aug 19, 2003

Punch! Punch! Punch!

I need advice on the best way to set up a Photoshop environment in Virtualbox. I have Linux Mint 14 as the host, and Windows 8 Pro as the guest running in Virtualbox. Photoshop CS6 is installed on the guest. My system has 16gb of RAM, a 120gb SSD and a 750gb HDD. I'm running dual monitors, one for Linux and one for Windows/PS, with a lot of moving back and forth. My video card is a Geforce GTX 550 ti.

What I need are suggestions on the best configuration to optimize Photoshop performance in this setting. For example, how much RAM should I allot to Virtualbox? Is there a way to set up a virtual scratch disk? Are there any other Virtualbox/Windows/Photoshop settings I need to worry about to get Photoshop running as quickly and smoothly as possible?

Additionally, is Virtualbox even the right tool for this? Should I consider other virtualization software?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

evil_bunnY posted:

I've seen people defrag VMs that lived on copy-on-write SANs. For them too, there is a special place in hell.
I've never seen it, but I've heard of people who deploy VDI solutions and don't turn off any automatic defragging. As I understand it, that is bad.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
http://www.vmware.com/files/pdf/view/Server-Storage-Sizing-Guide-Windows-7-TN.pdf
http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf

That and a whole lot of other things

caiman posted:

I need advice on the best way to set up a Photoshop environment in Virtualbox. I have Linux Mint 14 as the host, and Windows 8 Pro as the guest running in Virtualbox. Photoshop CS6 is installed on the guest. My system has 16gb of RAM, a 120gb SSD and a 750gb HDD. I'm running dual monitors, one for Linux and one for Windows/PS, with a lot of moving back and forth. My video card is a Geforce GTX 550 ti.

What I need are suggestions on the best configuration to optimize Photoshop performance in this setting. For example, how much RAM should I allot to Virtualbox? Is there a way to set up a virtual scratch disk? Are there any other Virtualbox/Windows/Photoshop settings I need to worry about to get Photoshop running as quickly and smoothly as possible?

Additionally, is Virtualbox even the right tool for this? Should I consider other virtualization software?

You can try it in VB, but depending on how much Photoshop work you're doing you should really look at dual booting. VB/VMware Workstation can accelerate 3D somewhat, but performance-wise you'd be better off just installing Windows on its own partition.
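If you do stick with VB, something along these lines is roughly where I'd start. This is just a sketch: the VM name, controller name, disk path, and sizes are placeholders, and 3D acceleration needs the Guest Additions installed in the Windows guest.

```python
import subprocess

VM = "Win8-Photoshop"  # hypothetical VM name

def vbox(*args):
    """Thin wrapper around the VBoxManage CLI."""
    subprocess.run(["VBoxManage", *args], check=True)

# Give the guest a healthy slice of the 16GB host (8GB here), a few cores,
# the maximum video memory, and 3D acceleration.
vbox("modifyvm", VM, "--memory", "8192", "--cpus", "4",
     "--vram", "128", "--accelerate3d", "on")

# Carve a second virtual disk out of the SSD to act as a dedicated
# Photoshop scratch disk, then point PS at that drive letter in its prefs.
vbox("createhd", "--filename", "/ssd/ps-scratch.vdi", "--size", "32768")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "1",
     "--device", "0", "--type", "hdd", "--medium", "/ssd/ps-scratch.vdi")
```

Even with all of that, VB's 3D support is limited, which is why dual booting is the safer bet for heavy Photoshop work.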

evil_bunnY
Apr 2, 2003

I would run w8 on the hardware and Linux virtualized, if only to get your poo poo color-corrected properly.

Spatulater bro!
Aug 19, 2003

Punch! Punch! Punch!

evil_bunnY posted:

I would run w8 on the hardware and Linux virtualized, if only to get your poo poo color-corrected properly.

What does this mean?

evil_bunnY
Apr 2, 2003

I doubt you can calibrate your virtualized display is what I mean.

Rhymenoserous
May 23, 2008
So apparently I'm getting a free upgrade to vCloud Suite Standard.

This poo poo looks complicated and I have no clue how I could use it in my environment. Still... free.

EDIT: Urgh reading the documentation is like being assaulted by some buzzword spitting monster.

Rhymenoserous fucked around with this message at 21:35 on Nov 29, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=30854

Free course if you have 3.5hrs to kill

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Rhymenoserous posted:

So apparently I'm getting a free upgrade to vCloud Suite Standard.

This poo poo looks complicated and I have no clue how I could use it in my environment. Still... free.

EDIT: Urgh reading the documentation is like being assaulted by some buzzword spitting monster.

It's not free if you consider that your support contract is going to go up by something like $300/socket per year.

evil_bunnY
Apr 2, 2003

Rhymenoserous posted:

So apparently I'm getting a free upgrade to vCloud Suite Standard.
you and everyone on ent+

CanOfMDAmp
Nov 15, 2006

Now remember kids, no running, no diving, and no salt on my margaritas.
So I'm currently tasked with rolling out a vSphere implementation on a relatively small scale. We're hoping to build something of an IaaS system with the ability for users to create and destroy their own machines as they see fit. It will be used for major testing, but no production work will be necessary. As I understand it, vCloud Director does not have all of the features Lab Manager did, including the LiveLink capability. Is this incorrect, and has it been replaced by something else? Having the ability to quickly share machines between dev/test would be a major timesaver, and something we really require if we're going to make this transition.

Further, in terms of hardware specifications for something like 64 concurrent machines, am I aiming too low by buying Dell tower server hardware in the $12k-15k range? I was thinking just one dedicated storage machine, with a separate compute machine for the processing. That's just a rough estimate, and I can give more concrete numbers if that would help you give better insight.

I'll be picking up some more reading material on vSphere this weekend and getting my knowledge of this up-to-date, but I was hoping to get just some preliminary response before jumping into anything.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
ahh gently caress, not right now; I'll answer in the morning

Dilbert As FUCK fucked around with this message at 02:16 on Dec 1, 2012

talaena
Aug 30, 2003

Danger Mouse! Power House!

Thank you for this link. I was unaware they had such an education section; it looks quite useful. This is my last week off before I start, so it seems like as good a way as any to delve into all the software.

Pixelboy
Sep 13, 2005

Now, I know what you're thinking...
I've never really played with virtualization until tonight. I realized the Win 8 Pro install I have on my desktop includes Hyper-V...

I now have an Ubuntu Server VM running -- and it just works. So slick.

Medpak
Dec 26, 2011

CanOfMDAmp posted:

Further, in terms of hardware specifications for something like 64 concurrent machines, am I aiming too low by buying Dell tower server hardware in the $12k-15k range? I was thinking just one dedicated storage machine, with a separate compute machine for the processing. That's just a rough estimate, and I can give more concrete numbers if that would help you give better insight.

I'll be picking up some more reading material on vSphere this weekend and getting my knowledge of this up-to-date, but I was hoping to get just some preliminary response before jumping into anything.

64 machines of what size? Running only on one server? Do you want to leverage HA/DRS for this testing cluster? Letting users create/destroy their own machines at will can become a dangerous thing. It's very doable, but you need to make sure you create a good process or else they will thrash the system.

Studebaker Hawk
May 22, 2004

CanOfMDAmp posted:

So I'm currently tasked with rolling out a vSphere implementation on a relatively small scale. We're hoping to build something of an IaaS system with the ability for users to create and destroy their own machines as they see fit. It will be used for major testing, but no production work will be necessary. As I understand it, vCloud Director does not have all of the features Lab Manager did, including the LiveLink capability. Is this incorrect, and has it been replaced by something else? Having the ability to quickly share machines between dev/test would be a major timesaver, and something we really require if we're going to make this transition.

Further, in terms of hardware specifications for something like 64 concurrent machines, am I aiming too low by buying Dell tower server hardware in the $12k-15k range? I was thinking just one dedicated storage machine, with a separate compute machine for the processing. That's just a rough estimate, and I can give more concrete numbers if that would help you give better insight.

I'll be picking up some more reading material on vSphere this weekend and getting my knowledge of this up-to-date, but I was hoping to get just some preliminary response before jumping into anything.
vCD can do this. You'll need to provide more information regarding VM specs/utilization for sizing; the raw number of VMs has no bearing on its own in this scenario.

KennyG
Oct 22, 2002
Here to blow my own horn.
How hard is it to re-mount a LUN/VM and move it from one cluster to another?

Basically, I'd love to have a 3-machine cluster for this infrastructure app we have, but the Oracle license per core is just loving killing me. If I separate the cluster and remove the ability to run it on other machines, I would only have to pay for 1 box and would still be covered for up to 10 days a year of disaster.

16 cores at ~$25k a core is $400k; license that across 3 boxes and you're effectively paying an extra $800k just for vMotion/HA. gently caress Larry Ellison.
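(Quick sanity check of that math, using the list price assumed above:)

```python
cores_per_host = 16
per_core = 25_000                      # ~$25k/core assumed above
one_host = cores_per_host * per_core   # $400,000 to license a single box
cluster = 3 * one_host                 # $1,200,000 to license all 3 hosts
print(cluster - one_host)              # ~$800,000 extra just for vMotion/HA
```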

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

KennyG posted:

How hard is it to re-mount a LUN/VM and move it from one cluster to another?
Just add the lun to the second cluster. No need to remove it from the first until after you move your app.
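If it helps, the mechanics on the destination side are roughly the sketch below; it assumes the new cluster's hosts are already zoned/masked to see the LUN, and the datastore/VM paths are made up:

```python
import subprocess

def on_host(*args):
    # Assumes this runs in the ESXi shell (or over SSH) on a host in the
    # destination cluster.
    subprocess.run(list(args), check=True)

# Rescan the storage adapters so the host sees the newly presented LUN;
# the existing VMFS datastore on it should then mount on its own.
on_host("esxcli", "storage", "core", "adapter", "rescan", "--all")

# With the VM powered off and unregistered from the old cluster, register
# it on the new one straight from the shared datastore (path is invented).
on_host("vim-cmd", "solo/registervm",
        "/vmfs/volumes/oracle-lun01/oradb01/oradb01.vmx")
```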

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

KennyG posted:

If I separate the cluster and remove the ability to run it on other machines
Wait, why wouldn't you just use host affinity?

CanOfMDAmp
Nov 15, 2006

Now remember kids, no running, no diving, and no salt on my margaritas.

Medpak posted:

64 machines of what size? Running only on one server? Do you want to leverage HA/DRS for this testing cluster? Letting users create/destroy their own machines at will can become a dangerous thing. It's very doable, but you need to make sure you create a good process or else they will thrash the system.

Machine size is relatively small, just running various OSes and a few small apps to ensure code changes work. DR is going to be a backup of the base images + VMware configuration, and will be handled manually. I was hoping end users working on their own wouldn't be an issue, as I really want this to be something of a "set and forget" type deal that will only take minimal maintenance and updating for new images and such. In some initial playing around I did with Lab Manager, it seemed perfect for letting users unfamiliar with virtualization clone their own copy of various pre-made VMs to work from, and having the garbage collector deal with the mess after the configured timeouts expire.

And trust me, while I realize VMware products usually take a bit more education than some web forums and a couple of books can provide, I've been given this opportunity as a learning experience to get a firm foothold on VM tech. Our current system is a bunch of desktops running different configurations with people RDCing into them, so anything that can take a number of desktops and make them virtual is a massive money saver.

EDIT: about users, we're currently operating with several teams of 5-6 people each (the total comes to around 50-60) on a scheduling system where people check out time on the physical boxes we have. I'm not worried about teaching a new process, so I have pretty free rein to make them work as I need them to.

CanOfMDAmp fucked around with this message at 08:59 on Dec 3, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Well, I was going to take a week or two break from posting, but aww yeah, I just got confirmation that I'm going to VMware's Partner Exchange 2013!

:w00t:

evil_bunnY
Apr 2, 2003

KennyG posted:

16 cores at ~$25k a core is $400k - if you put that on 3 boxes, now you're looking at effectively paying $800k for vMotion/HA. gently caress larry Ellison.
Mandatory host affinity.

Erwin
Feb 17, 2006

Is it just me, or is installing vCenter 5.1 a million times more complicated than previous versions? I had to restart the installation like 5 times, each time after running up against some fiddly SQL setting that was set wrong. The best part is that most of the errors had nothing at all to do with the actual problem.

sanchez
Feb 26, 2003
I did a fresh install on a new server last week that was smooth. I think the quick setup or whatever it's called failed, but installing each component (SSO, then vCenter, etc.) worked just fine. I was using the bundled SQL Express DB.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
No, it has been made more complex in 5.1. What version of SQL are you using?

The scripts do make it a bit easier though.

Fake E: Sometimes with the Easy Install you'll have to wait for the services to start, as they are slow to come up the first time; this is especially the case if vCenter has <2GB of RAM.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
The method to install SSO with an external database is ridiculously stupid. VMware should be ashamed they shipped it that way.

Erwin
Feb 17, 2006

Separate MSSQL 2008 R2. I did a simple install, but all problems were SQL related. I ran the SSO table spaces script, but decided I didn't want the SSO database called "RSA" because that's very non-descriptive. But, turns out you can't change the database name. Then, I forgot to make a 64-bit DSN for vCenter, and after doing so, it wouldn't recognize it until after I quit and reran the installer. THEN, it threw an error that the SQL user I gave didn't have the right permissions on the vCenter database, even though it did. Turns out it also needed to be dbo on the msdb database to create agent jobs, but the error didn't say that.

The main problem is insufficient error descriptions.
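For anyone who hits the same wall, the msdb part boils down to the two statements below (shown here via pyodbc purely for illustration; the server and login names are made up, and you can just as easily run them in Management Studio). Also remember the vCenter DSN has to be created with the 64-bit ODBC administrator, which is C:\Windows\System32\odbcad32.exe on a 64-bit box, not the SysWOW64 one.

```python
import pyodbc

# Hypothetical illustration: give the vCenter SQL login db_owner on msdb so
# the 5.1 installer can create its SQL Agent jobs (the requirement the error
# message never mentions). Server and login names are invented.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=sqlhost;DATABASE=msdb;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()
cur.execute("CREATE USER [vcenter_svc] FOR LOGIN [vcenter_svc]")
cur.execute("EXEC sp_addrolemember 'db_owner', 'vcenter_svc'")
```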

three posted:

The method to install SSO with an external database is ridiculously stupid. VMware should be ashamed they shipped it that way.
Yeah, it's more convoluted than some of the shittiest open source software I've dealt with.

edit2: Also bravo on the lovely new web client requiring a separate VM with 4 cores to serve it.

edit3: And bravo on pushing this god drat web client and not having an Update Manager plugin for it :wtc:

Erwin fucked around with this message at 23:13 on Dec 4, 2012

itskage
Aug 26, 2003


Does anyone have any experience using USB NICs with VMware? I need to add another DMZ connection to a host that is maxed out on NICs and expansion cards. I would never consider it for vMotion, iSCSI storage, or the main LAN, but since our internet is only 10Mbps fiber, I don't see it becoming a big issue.

Edit: Never mind, looks like that won't even be supported.

Now I have to decide between dropping a LAN connection, or a vMotion connection.

itskage fucked around with this message at 17:31 on Dec 5, 2012

Moey
Oct 22, 2010

I LIKE TO MOVE IT

itskage posted:

Does anyone have any experience using USB NICs with VMware? I need to add another DMZ connection to a host that is maxed out on NICs and expansion cards. I would never consider it for vMotion, iSCSI storage, or the main LAN, but since our internet is only 10Mbps fiber, I don't see it becoming a big issue.

Edit: Never mind, looks like that won't even be supported.

Now I have to decide between dropping a LAN connection, or a vMotion connection.

Have you thought about segregating that traffic with VLANs?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

We're setting up storage replication between our primary and secondary NetApp units for a DR plan. Everything is in NFS volumes, so the plan is to replicate the changes nightly when activity is low. If the building burns, we mount up the volumes on the backup hosts, import the VMs, and get back online in a couple of hours.

The question I have is: should I be concerned with trying to quiesce traffic before replication kicks off? The NetApp unit generates a volume delta while the replication is happening, so you're moving stable data. My assumption is that the VMDKs, as I bring them up (on the hopefully non-existent occasion that I actually have to do this), will just think they had a hard crash at the time of replication, and everything we run, including databases, seems pretty resilient to hard crashes these days. Sure, in the case of databases there is going to be a little data loss because the log marker hasn't incremented after a little bit of data was written out, but that's maybe a few seconds' worth of data, and we're going to be doing 12-hour replication schedules, which means we're going to be losing on average 6 hours' worth of stateful data anyhow.

Is my gut right on this or do I have my head up my own rear end and really need to get the traffic quiesced with the NetApp VMware plugins?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Moey posted:

Have you thought about segregating that traffic with VLANs?

Yeah, good lord. Tag your traffic down to the host and just set up different virtual networks for each traffic tag.
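For reference, on a standard vSwitch the per-host side of that is roughly the sketch below; the portgroup name and VLAN ID are made up, and the physical switch port your uplinks sit on has to be trunking those VLANs:

```python
import subprocess

def esxcli(*args):
    # Assumes this runs in the ESXi shell (or over SSH) on the host.
    subprocess.run(["esxcli", *args], check=True)

# Add a portgroup for the DMZ traffic on the existing vSwitch, then tag it
# with its VLAN so the traffic stays segregated without burning another NIC.
esxcli("network", "vswitch", "standard", "portgroup", "add",
       "--portgroup-name", "DMZ", "--vswitch-name", "vSwitch0")
esxcli("network", "vswitch", "standard", "portgroup", "set",
       "--portgroup-name", "DMZ", "--vlan-id", "100")
```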

evil_bunnY
Apr 2, 2003

If it's all VMs, why not SRM?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

itskage posted:

Does anyone have any experience using USB NICs with VMware? I need to add another DMZ connection to a host that is maxed out on NICs and expansion cards. I would never consider it for vMotion, iSCSI storage, or the main LAN, but since our internet is only 10Mbps fiber, I don't see it becoming a big issue.

Edit: Never mind, looks like that won't even be supported.

Now I have to decide between dropping a LAN connection, or a vMotion connection.

How are you currently utilizing your NICs? What is the avg. NIC count per host?

BangersInMyKnickers posted:

We're setting up storage replication between our primary and secondary NetApp units for a DR plan. Everything is in NFS volumes, so the plan is to replicate the changes nightly when activity is low. If the building burns, we mount up the volumes on the backup hosts, import the VMs, and get back online in a couple of hours.

The question I have is: should I be concerned with trying to quiesce traffic before replication kicks off? The NetApp unit generates a volume delta while the replication is happening, so you're moving stable data. My assumption is that the VMDKs, as I bring them up (on the hopefully non-existent occasion that I actually have to do this), will just think they had a hard crash at the time of replication, and everything we run, including databases, seems pretty resilient to hard crashes these days. Sure, in the case of databases there is going to be a little data loss because the log marker hasn't incremented after a little bit of data was written out, but that's maybe a few seconds' worth of data, and we're going to be doing 12-hour replication schedules, which means we're going to be losing on average 6 hours' worth of stateful data anyhow.

Is my gut right on this or do I have my head up my own rear end and really need to get the traffic quiesced with the NetApp VMware plugins?

Like evil_bunnY said, SRM might be something to look into.
If you aren't familiar with it, here are some free courses to help:
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=31255
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=39993

Dilbert As FUCK fucked around with this message at 23:02 on Dec 5, 2012

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

evil_bunnY posted:

If it's all VMs, why not SRM?

Unless something changed recently, it is way outside our budget.

e: There are also a few legacy non-VM iSCSI LUNs and CIFS volumes hanging around that need to be replicated. I'm not sure I'll ever be able to fully get rid of them, so if I can do all my replication at the storage-appliance level, that seems easier. We've already paid for the licensing there.

BangersInMyKnickers fucked around with this message at 00:31 on Dec 6, 2012

madsushi
Apr 19, 2009

Baller.
#essereFerrari

BangersInMyKnickers posted:

We're setting up storage replication between our primary and secondary NetApp units for a DR plan. Everything is in NFS volumes, so the plan is to replicate the changes nightly when activity is low. If the building burns, we mount up the volumes on the backup hosts, import the VMs, and get back online in a couple of hours.

The question I have is: should I be concerned with trying to quiesce traffic before replication kicks off? The NetApp unit generates a volume delta while the replication is happening, so you're moving stable data. My assumption is that the VMDKs, as I bring them up (on the hopefully non-existent occasion that I actually have to do this), will just think they had a hard crash at the time of replication, and everything we run, including databases, seems pretty resilient to hard crashes these days. Sure, in the case of databases there is going to be a little data loss because the log marker hasn't incremented after a little bit of data was written out, but that's maybe a few seconds' worth of data, and we're going to be doing 12-hour replication schedules, which means we're going to be losing on average 6 hours' worth of stateful data anyhow.

Is my gut right on this or do I have my head up my own rear end and really need to get the traffic quiesced with the NetApp VMware plugins?

Don't quiesce the data. You're just going to have awful performance during the VMware snapshot creation/deletion and it doesn't buy you anything at all.

Let NetApp take the snapshots at will (via VSC or a snapvault schedule) and then replicate it like that. You get a 'crash-consistent' backup that is going to work. Can you remember the last time that a VM failed to come up after you did a hard power/reset on it? The answer is 'never'.
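If you end up scripting it instead of letting VSC drive, the 7-Mode version is roughly the sketch below; the filer hostnames, volume names, and snapshot name are invented, and exact syntax varies by ONTAP version, so treat it as an assumption rather than gospel:

```python
import subprocess

def filer(host, command):
    # Run a Data ONTAP 7-Mode CLI command over SSH (assumes key-based auth).
    subprocess.run(["ssh", f"root@{host}", command], check=True)

# Take a storage-level (crash-consistent) snapshot of the NFS datastore
# volume on the primary, then kick the mirror so the DR side catches up.
filer("netapp-prod", "snap create vm_nfs_vol nightly.0")
filer("netapp-dr", "snapmirror update dr_vm_nfs_vol")
```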


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

madsushi posted:

Let NetApp take the snapshots at will (via VSC or a snapvault schedule) and then replicate it like that. You get a 'crash-consistent' backup that is going to work. Can you remember the last time that a VM failed to come up after you did a hard power/reset on it? The answer is 'never'.
You've clearly never worked with either Oracle or XFS.

  • Reply