|
Erwin posted:That should be fine as long as you rescan the backup repository, but you really should open a support case with Veeam so that A) they can make sure it's done the right way and B) they are made aware of their software doing dumb things. Veeam has improved a lot in the past couple of years, but they can't keep improving unless they know what is wrong with their product.
|
# ? Jan 6, 2015 00:05 |
|
Hey, I never said they cared.
|
# ? Jan 6, 2015 00:07 |
|
Misogynist posted:Removing snapshots involves consolidating the deltas into the source disk, so if you're experiencing high latency there I'd suspect I/O starvation on your backend storage as the most likely culprit, especially if you're backing up very busy volumes. Ensure you have the spare I/O to handle the disk consolidation, or try to do your consolidation during periods when the disk is less busy. On older (non-VAAI) storage there can also be latency issues related to metadata updates, which require locks on the cluster filesystem, especially if you have a lot of thin-provisioned volumes that are constantly growing. If you have 30 VMs on the same datastore, try moving some to another datastore and see if the issue continues as severely. Thanks for the information. I'll see if I can go back to the storage team, who assure me it's me and not them. I have plenty of datastores, but something screwy is going on here. I'm installing v8, so hopefully the latency limit option helps.
|
# ? Jan 6, 2015 00:27 |
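The advice above amounts to scheduling consolidation into your quietest I/O window. A minimal sketch of that idea (the threshold and the per-hour latency samples are invented; real numbers would come from esxtop batch output or vCenter performance stats):

```python
# Pick the quietest hour for snapshot consolidation from per-hour
# datastore latency samples (ms). All figures here are hypothetical
# placeholders, not real measurements.
hourly_latency_ms = {
    0: 4.1, 3: 2.8, 9: 22.5, 12: 31.0, 15: 27.4, 21: 6.9,
}

LATENCY_CEILING_MS = 10.0  # arbitrary "too busy to consolidate" threshold

# Hours with enough I/O headroom for a consolidation job.
quiet_hours = sorted(
    hour for hour, ms in hourly_latency_ms.items() if ms < LATENCY_CEILING_MS
)
# Single best window: the hour with the lowest observed latency.
best_hour = min(hourly_latency_ms, key=hourly_latency_ms.get)

print(f"Quiet hours: {quiet_hours}")
print(f"Best window: {best_hour}:00")
```

The same filtering works whatever the data source; only the dictionary of samples changes.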
|
ghostinmyshell posted:Thanks for the information. I'll see if I can go back to the storage team, who assure me it's me and not them. I have plenty of datastores, but something screwy is going on here. I'm installing v8, so hopefully the latency limit option helps. Gotta love silos. I can assure you, it's always the storage.
|
# ? Jan 6, 2015 01:16 |
|
adorai posted:Gotta love silos. I can assure you, it's always the storage. Nonsense, it's always the network. Unless you're using FC, then it's the queue-depth.
|
# ? Jan 6, 2015 01:43 |
|
NippleFloss posted:Nonsense, it's always the network. Unless you're using FC, then it's the queue-depth. Poppycock, it's the application's fault for using non-standard ports and RPC; we can't be opening those ports for network access because it's insecure.
|
# ? Jan 6, 2015 04:15 |
|
So we've got a customer requirement come through which needs about 45 servers virtualised. Is there a good bit of software I can run that will monitor said physical servers for a week and say "yeah, based on this utilisation you probably need another 20 GB of RAM and 2 TB of storage for your virtual environment"? I seem to remember consultants doing it for a company I worked at years ago, but I have no idea what they used. Or does such a product not exist, and it's just a case of installing my favourite monitoring suite and adding a bunch of poo poo together in Excel?
|
# ? Jan 6, 2015 13:52 |
|
Ahdinko posted:So we've got a customer requirement come through which needs about 45 servers virtualised. Is there a good bit of software I can run that will monitor said physical servers for a week and say "yeah, based on this utilisation you probably need another 20 GB of RAM and 2 TB of storage for your virtual environment"? I seem to remember consultants doing it for a company I worked at years ago, but I have no idea what they used. Or does such a product not exist, and it's just a case of installing my favourite monitoring suite and adding a bunch of poo poo together in Excel? Do you not already have a monitoring environment which tells you utilization?
|
# ? Jan 6, 2015 15:11 |
|
It's a new customer; they haven't provided us access to their monitoring system yet, but I was just wondering if there was a tool designed to do this already that I can just chuck an SNMP community into or something.
|
# ? Jan 6, 2015 15:25 |
|
Dell DPACK will do that.
|
# ? Jan 6, 2015 16:18 |
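Short of DPACK, the "add it all up in Excel" approach is just peak utilisation plus headroom. A rough sketch of that arithmetic (the server names, observed peaks, and 25% headroom factor are all invented for illustration):

```python
# Right-size a virtual environment from a week of observed per-server
# peaks. Figures are hypothetical placeholders for whatever your
# monitoring suite (or a tool like DPACK) actually records.
observed_peaks = [
    # (name, peak_ram_gb, used_disk_gb)
    ("web01", 6.2, 120),
    ("db01", 28.0, 900),
    ("app01", 11.5, 310),
]

HEADROOM = 1.25  # 25% growth allowance, an arbitrary choice

ram_needed_gb = sum(ram for _, ram, _ in observed_peaks) * HEADROOM
disk_needed_gb = sum(disk for _, _, disk in observed_peaks) * HEADROOM

print(f"RAM: {ram_needed_gb:.0f} GB, storage: {disk_needed_gb:.0f} GB")
```

With 45 servers the list gets longer but the sum is the same; CPU would be sized the same way from peak MHz or core utilisation.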
|
So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID? The reason I ask is that I know OpenStack recommends you initialize disks in RAID 0 to take advantage of the controller's write cache, just in case bad things happen.
|
# ? Jan 6, 2015 19:13 |
|
GnarlyCharlie4u posted:So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID?
|
# ? Jan 6, 2015 19:22 |
|
It looks like you're reading the docs for Swift, which is the OpenStack analog to Amazon S3. Are you actually even going to be using that service? wyoak is right: you do not want to be doing RAID 0 on anything. I think the docs mention it just in case your RAID controller is annoying and does not have a JBOD mode. Swift wants individual, raw disks presented to it, just like ZFS does. You can hack around that by setting up a bunch of "RAID 0" virtual disks with only one disk in each pool. oVirt is more like a traditional VMware host. For that, I think you want to set it up like you would any other server: either RAID 6 or RAID 10, depending on your capacity, data protection, and IOPS requirements.
|
# ? Jan 6, 2015 19:34 |
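The RAID 6 vs RAID 10 trade-off mentioned above is easy to quantify for usable capacity; this is just standard RAID arithmetic, and the eight 2 TB disks are an arbitrary example:

```python
# Usable capacity for an n-disk array of equal-size disks.
# RAID 6: capacity of n-2 disks, survives any two disk failures.
# RAID 10: capacity of n/2 disks, survives one failure per mirror pair.
def usable_tb(level: str, disks: int, disk_tb: float) -> float:
    if level == "raid6":
        return (disks - 2) * disk_tb
    if level == "raid10":
        return disks / 2 * disk_tb
    raise ValueError(f"unhandled level: {level}")

# Example: eight 2 TB disks.
print("RAID 6:", usable_tb("raid6", 8, 2.0), "TB")   # 12.0 TB usable
print("RAID 10:", usable_tb("raid10", 8, 2.0), "TB") # 8.0 TB usable
```

RAID 10 gives up a third of that usable space in exchange for better random-write IOPS and faster rebuilds, which is exactly the capacity/protection/IOPS balance the post describes.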
|
GnarlyCharlie4u posted:So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID? It doesn't matter, basically, unless you're using Gluster as a datastore with the disks on the host. Otherwise, you should always use remote storage (iSCSI LUN, NFS, whatever). Do not use a POSIX datacenter just so you can use local storage. Export NFS from the host if you have to, but you should be using reliable storage somewhere else. Hosts are identified by the UUID from DMI, so losing a host and needing to reinstall it is basically no skin off your back as long as the engine and VMs are on remote storage. Use at least 2 hosts. Use remote storage. What you do with the disks on those hosts doesn't matter.
|
# ? Jan 6, 2015 20:10 |
|
GnarlyCharlie4u posted:So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID? How ghetto do you want to be? In my home lab I boot from USB and have a Gluster datastore that holds my hosted engine. Each host exports its hard drives via iSCSI, and I have another VM that uses RDM or whatever oVirt calls it to build a RAID 10 pool from those LUNs. It's a nice distributed SAN, but I doubt anyone else would do this kind of poo poo in a business environment. Your boss, however, seems like he would like it.
|
# ? Jan 7, 2015 01:00 |
|
adorai posted:How ghetto do you want to be? In my home lab I boot from USB and have a Gluster datastore that holds my hosted engine. Each host exports its hard drives via iSCSI, and I have another VM that uses RDM or whatever oVirt calls it to build a RAID 10 pool from those LUNs. It's a nice distributed SAN, but I doubt anyone else would do this kind of poo poo in a business environment. Your boss, however, seems like he would like it. Why not have them export as Ceph RBD or add them to a Gluster pool? Exporting LUNs, importing them, building a RAID on top of it, and using that as a datastore is something I've done with VMware (RBD to VMs, VMs DRBD with each other and export as NFS) for a team that didn't have a SAN, but there's no real need when everything talks Gluster natively.
|
# ? Jan 7, 2015 01:22 |
|
evol262 posted:Why not have them export as Ceph RBD or add them to a Gluster pool? Exporting LUNs, importing them, building a RAID on top of it, and using that as a datastore is something I've done with VMware (RBD to VMs, VMs DRBD with each other and export as NFS) for a team that didn't have a SAN, but there's no real need when everything talks Gluster natively.
|
# ? Jan 7, 2015 01:31 |
|
Thanks Ants posted:Dell DPACK will do that. That looks perfect, thank you.
|
# ? Jan 7, 2015 12:45 |
|
Docjowles posted:It looks like you're reading the docs for Swift, which is the OpenStack analog to Amazon S3. Are you actually even going to be using that service? wyoak is right, you do not want to be doing RAID 0 on anything. I think the docs mention that just in case your RAID controller is annoying and does not have a JBOD mode. Swift wants individual, raw disks presented to it just like ZFS does. You can hack around that by setting up a bunch of "RAID 0" virtual disks with only one disk in each pool. evol262 posted:It doesn't matter, basically, unless you're using Gluster as a datastore with the disks on the host. This is the case for all of my PE2850's and I think my 2900's.
|
# ? Jan 7, 2015 19:33 |
|
GnarlyCharlie4u posted:This is the case for all of my PE2850's and I think my 2900's Wait, this is all being done on PE2850's? This story keeps getting more and more insane. Is that just for labbing it up, or will your production environment actually be running on 8-10 year old servers?
|
# ? Jan 7, 2015 19:45 |
|
Docjowles posted:Wait this is all being done on PE2850's? This story keeps getting more and more insane Is that just for labbing it up or will your production environment actually be running on 8-10 year old servers? Not all of it, we just have a shitton of them doing nothing, so my boss wanted to get them all cloudy. I've got:
2 Dell C6100s (the XS23-TY3 pieces of poo poo)
3 Dell 2900s (just finished building and fixing them)
2 HP ProLiant DL580 G4s
3 HP ProLiant DL360 G5s
a Dell PE 6950 and attached MD1000
Plus whatever similar hardware we can free up once we begin to virtualize things. Yes, our production environment will be running on really old poo poo.
|
# ? Jan 7, 2015 19:53 |
|
Holy gently caress. The only way this is going to save money over a few new hosts + vSphere Essentials Plus is if your time costs gently caress all. In which case, condolences.
|
# ? Jan 7, 2015 21:48 |
|
GnarlyCharlie4u posted:not all of it, we just have a shitton of them doing nothing so my boss wanted to get them all cloudy Dude, just quit.
|
# ? Jan 7, 2015 22:12 |
|
GnarlyCharlie4u posted:not all of it, we just have a shitton of them doing nothing so my boss wanted to get them all cloudy There is not enough alcohol in the world to be responsible for running production on recycled ancient hardware and a huge suite of complex software you have no experience with.
|
# ? Jan 7, 2015 22:18 |
|
After the afternoon I just spent where Meraki took down my production wifi, I feel much better.
|
# ? Jan 7, 2015 22:19 |
|
Jesus, we decommed our HP G5 servers a couple of years ago; they're loving ancient at this point. I have some G6's I'm ready to put out to pasture as well.
|
# ? Jan 7, 2015 22:23 |
|
Moey posted:After the afternoon I just spent where Meraki took down my production wifi, I feel much better. Do tell. Considering buying them.
|
# ? Jan 7, 2015 22:32 |
|
I was considering it too but then I realized that all the config is done "in the cloud" and that if I ever decide not to renew my subscriptions I am stuck with the config as is until the end of time. That soured me enough on the concept to make me look elsewhere.
|
# ? Jan 7, 2015 22:37 |
|
KS posted:Do tell. Considering buying them.
|
# ? Jan 7, 2015 22:37 |
|
Number19 posted:I was considering it too but then I realized that all the config is done "in the cloud" and that if I ever decide not to renew my subscriptions I am stuck with the config as is until the end of time. That soured me enough on the concept to make me look elsewhere. IIRC Meraki stuff will flat out stop working; it won't just hold the last config that was pushed to it. I don't see it as a huge issue, I just see the management licenses as being like the support contracts that I keep all the other stuff covered by.
|
# ? Jan 7, 2015 22:47 |
|
KS posted:Do tell. Considering buying them. I'll make a post about it in the wifi thread later, but the tier 1 tech made a change on a production SSID that took it down, then claimed he couldn't reverse it. I have the same SSID across all my campuses, just some with different VLAN tags and some currently without (still need to address these). Every site that didn't have that SSID tagged stopped working.
|
# ? Jan 7, 2015 22:57 |
|
Rhymenoserous posted:dude just quit. And deprive the ticket thread of all my glorious…? No way. I'm actually on my way out; I just need to get hired elsewhere first. Worst case, I can treat this whole thing as my personal lab and have an opportunity to learn a lot about virtualization in the meantime. Quick question that I probably could have just googled, but since I'm already posting here... The CIO wants CentOS 7, but the oVirt management engine can only handle 6.5. Would it be a bad idea to install CentOS 7 on a host and then virtualize an instance of 6.5 to run the engine?
|
# ? Jan 8, 2015 00:03 |
|
I love my singular Meraki wireless access point, sitting under my desk at home after I took the webinar to get a free one.
|
# ? Jan 8, 2015 01:14 |
|
GnarlyCharlie4u posted:Would it be a bad idea to install CentOS 7? Yes, until 7.1 is released (at the very least).
|
# ? Jan 8, 2015 02:39 |
|
GnarlyCharlie4u posted:CIO wants Centos 7 but the ovirt management engine can only handle 6.5 No, this is fine. EL7 has significant virt improvements over 6.5 or 6.6, and 7.0.z is out soon. Also, the engine is fine on EL7 now, I think. Ask in #ovirt on OFTC. Most of the engine people are in Israel, so early US time is a good time to ask. Hosted engine on iSCSI was a little broken last time I looked, but NFS (v3 or v4) on EL7 should be okay. I'd be shocked if it didn't get pushed for the impending RHEV 3.5 release.
|
# ? Jan 8, 2015 06:02 |
|
I realize that this isn't a virtualization question, but does anyone have experience with Huawei CloudEngine switches? They're about half the price of comparable switches, and networking is kind of their thing; I just wonder if they're total crap or if they're worthwhile.
|
# ? Jan 8, 2015 23:49 |
|
What's the general consensus on Hyper-V? I run a fairly small VMware shop (mostly Std licensing) with around 20 hosts and 200 or so VMs. We're looking at licensing for Windows Server right now for one of our products, which has a fairly high memory footprint requirement. Historically it has run only on 2008 R2 (or Windows 7 Pro, because it allows us to use 192GB of RAM compared to 32GB on 2008 R2 Std). For us to run it on 2008 R2 we need the Enterprise edition to get around the 32GB memory limitation, which requires us to purchase 2012 R2 Datacenter to obtain those specific downgrade rights. We're talking about purchasing 100 or so licenses of Datacenter purely not for virtualization use, but I'm realizing now that if we do this, it would give us a fairly decent entry point into Hyper-V (at least for one of our datacenters). I'm a little bit hesitant to use Hyper-V, if only because I haven't really used it before. We're also actually looking at deploying a small Azure-based solution this year, which from what I've heard integrates fairly well with Hyper-V. Am I going to be sad if I give up on my dreams of deploying a VMware-based virtualization platform for our datacenter and go with Hyper-V instead?
|
# ? Jan 9, 2015 02:14 |
|
Hyper-V looks attractive once you're buying Datacenter licenses, but with the recent changes around the licensing of System Center, I'd see what the actual price is to get something comparable to vSphere Standard w/vCenter.
|
# ? Jan 9, 2015 02:23 |
|
Haha, holy poo poo, Datacenter licensing is almost $6k per server. You'd be an idiot (or someone who didn't plan properly) to purchase it and not use virtualization.
|
# ? Jan 9, 2015 02:54 |
|
My understanding is that you need only one copy of Datacenter per virtual host, be it Hyper-V or VMware, and you can have unlimited virtual instances on that host. Then again, Microsoft licensing is black magic, so who really knows.
|
# ? Jan 9, 2015 03:17 |
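That is roughly how 2012 R2 licensing worked: Standard covered two Windows VMs per license on a host, Datacenter covered unlimited VMs, so the break-even is simple arithmetic. A sketch with approximate 2015-era list prices (these are ballpark figures, not real quotes, and actual pricing also depends on socket counts and agreements):

```python
# Back-of-the-envelope: when does one Datacenter license beat stacking
# Standard licenses on a single two-socket host? Under the 2012 R2
# model, each Standard license covers two Windows VM OSEs; Datacenter
# covers unlimited. Prices are rough list prices, purely illustrative.
STANDARD_PRICE = 882     # approximate 2012 R2 Standard list price, USD
DATACENTER_PRICE = 6155  # approximate 2012 R2 Datacenter list price, USD
VMS_PER_STANDARD = 2

def standard_cost(vm_count: int) -> int:
    licenses = -(-vm_count // VMS_PER_STANDARD)  # ceiling division
    return licenses * STANDARD_PRICE

# First VM count at which stacked Standard costs more than Datacenter.
break_even = next(
    n for n in range(1, 100) if standard_cost(n) > DATACENTER_PRICE
)
print(f"Datacenter pays off at {break_even} Windows VMs per host")  # 13 here
```

With these numbers a host running a dozen or fewer Windows VMs is cheaper on stacked Standard, which is why Datacenter only makes sense at real consolidation density.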