Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Erwin posted:

That should be fine as long as you rescan the backup repository, but you really should open a support case with Veeam so that A) they can make sure it's done the right way and B) they are made aware of their software doing dumb things. Veeam has improved a lot in the past couple of years, but they can't keep improving unless they know what is wrong with their product.
I'm so happy to hear that Veeam now cares about what's wrong with their product :)


Erwin
Feb 17, 2006

Hey, I never said they cared.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.

Misogynist posted:

Removing snapshots involves consolidating the deltas into the source disk, so if you're experiencing a high amount of latency there, I'd suspect I/O starvation on your backend storage as the most likely culprit, especially if you're backing up very busy volumes. Ensure you have the spare I/O to handle the disk consolidation, or try to do your consolidation during periods when the disk is less busy. On older (non-VAAI) storage there can also be latency issues related to metadata updates, which require locks on the cluster filesystem, especially if you have a lot of thin-provisioned volumes that are constantly growing. If you have 30 VMs on the same datastore, try moving some to another datastore and see if the issue continues as severely.

Thanks for the information. I'll see if I can go back to the storage team, which assures me it's me and not them. I have plenty of datastores, but something screwy is going on here. I'm installing v8, so hopefully the latency limit option helps.
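
Misogynist's spare-I/O advice boils down to a simple headroom check before you schedule consolidation. A minimal sketch (the IOPS figures are hypothetical; get real ones from esxtop or your array's stats):

```python
def consolidation_headroom(array_max_iops, measured_peak_iops, consolidation_iops):
    """Return spare IOPS and whether a snapshot consolidation fits in it."""
    spare = array_max_iops - measured_peak_iops
    return spare, spare >= consolidation_iops

# Hypothetical numbers: a 10k-IOPS array, 7k peak production load, and a
# consolidation that needs ~2k IOPS to finish in a sane window.
spare, fits = consolidation_headroom(10_000, 7_000, 2_000)
print(spare, fits)  # -> 3000 True
```

Peak-hour numbers matter more than averages here, since consolidation competes directly with production I/O.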

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

ghostinmyshell posted:

Thanks for the information. I'll see if I can go back to the storage team, which assures me it's me and not them. I have plenty of datastores, but something screwy is going on here. I'm installing v8, so hopefully the latency limit option helps.
Gotta love silos. I can assure you, it's always the storage.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

Gotta love silos. I can assure you, it's always the storage.

Nonsense, it's always the network. Unless you're using FC, then it's the queue-depth.

Wicaeed
Feb 8, 2005

NippleFloss posted:

Nonsense, it's always the network. Unless you're using FC, then it's the queue-depth.

Poppycock, it's the application's fault for using non-standard ports and RPC; we can't be opening those ports for network access because it's insecure.

Ahdinko
Oct 27, 2007

WHAT A LOVELY DAY
So we've got a customer requirement come through which needs about 45 servers virtualised. Is there a good bit of software that I can run that will monitor said physical servers for a week and say "yeah, based on this utilisation you probably need another 20GB of RAM and 2TB of storage for your virtual environment"? I seem to remember consultants doing it for a company I worked at years ago, but I have no idea what they used. Or does such a product not exist, and it's just a case of installing my favourite monitoring suite and adding a bunch of poo poo together in Excel?
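
For what it's worth, the "adding a bunch of numbers together in Excel" part can be sketched in a few lines. Assuming you've already pulled per-server utilisation peaks from a monitoring suite or SNMP polling (the sample data below is made up), peak-based sizing looks roughly like:

```python
# Hypothetical week-long utilisation peaks per physical server; in
# practice these come from your monitoring suite or SNMP polling.
servers = [
    {"name": "db01",  "peak_ram_gb": 24, "peak_disk_tb": 1.2},
    {"name": "web01", "peak_ram_gb": 6,  "peak_disk_tb": 0.3},
    {"name": "app01", "peak_ram_gb": 12, "peak_disk_tb": 0.5},
]

HEADROOM = 1.3  # 30% margin for growth and failover -- pick your own

ram_needed = sum(s["peak_ram_gb"] for s in servers) * HEADROOM
disk_needed = sum(s["peak_disk_tb"] for s in servers) * HEADROOM
print(f"Size for roughly {ram_needed:.0f}GB RAM and {disk_needed:.1f}TB storage")
```

This ignores CPU and IOPS, which a proper assessment tool will also sample over the observation window.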

evol262
Nov 30, 2010
#!/usr/bin/perl

Ahdinko posted:

So we've got a customer requirement come through which needs about 45 servers virtualised. Is there a good bit of software that I can run that will monitor said physical servers for a week and say "yeah, based on this utilisation you probably need another 20GB of RAM and 2TB of storage for your virtual environment"? I seem to remember consultants doing it for a company I worked at years ago, but I have no idea what they used. Or does such a product not exist, and it's just a case of installing my favourite monitoring suite and adding a bunch of poo poo together in Excel?

Do you not already have a monitoring environment which tells you utilization?

Ahdinko
Oct 27, 2007

WHAT A LOVELY DAY
It's a new customer; they haven't provided us access to their monitoring system yet, but I was just wondering if there was a tool designed to do this already that I can just chuck an SNMP community into or something.

Thanks Ants
May 21, 2004

#essereFerrari


Dell DPACK will do that.

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof
So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID?
The reason I ask is that I know OpenStack recommends you initialize disks in RAID 0 to take advantage of the controller's write cache, just in case bad things happen.

wyoak
Feb 14, 2005

a glass case of emotion

Fallen Rib

GnarlyCharlie4u posted:

So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID?
The reason I ask is that I know OpenStack recommends you initialize disks in RAID 0 to take advantage of the controller's write cache, just in case bad things happen.
I'm not 100% sure what you're getting at here, but don't use RAID 0, ever (yes, there are exceptions, but if you're asking about it, you probably shouldn't be using it). 'Bad things happen' is the reason you don't use RAID 0.

Docjowles
Apr 9, 2009

It looks like you're reading the docs for Swift, which is the OpenStack analog to Amazon S3. Are you actually even going to be using that service? wyoak is right, you do not want to be doing RAID 0 on anything. I think the docs mention that just in case your RAID controller is annoying and does not have a JBOD mode. Swift wants individual, raw disks presented to it just like ZFS does. You can hack around that by setting up a bunch of "RAID 0" virtual disks with only one disk in each pool.

oVirt is more like a traditional VMware host. For that I think you want to set it up like you would any other server. Either RAID 6 or RAID 10 depending on your capacity, data protection and iops requirements.
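
The RAID 6 vs. RAID 10 capacity side of that trade-off is simple arithmetic. A rough sketch (disk counts and sizes are hypothetical, and filesystem/formatting overhead is ignored):

```python
def usable_tb(raid_level, disks, disk_tb):
    """Usable capacity for a few common RAID levels (ignores formatting overhead)."""
    if raid_level == "raid10":
        return disks // 2 * disk_tb   # half the disks are mirrors
    if raid_level == "raid6":
        return (disks - 2) * disk_tb  # two disks' worth of parity
    if raid_level == "raid5":
        return (disks - 1) * disk_tb  # one disk's worth of parity
    raise ValueError(raid_level)

# Hypothetical 8 x 2TB array:
print(usable_tb("raid10", 8, 2))  # -> 8
print(usable_tb("raid6", 8, 2))   # -> 12
```

RAID 6 wins on capacity; RAID 10 generally wins on write IOPS and rebuild behaviour, which is why the choice depends on the requirements Docjowles lists.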

evol262
Nov 30, 2010
#!/usr/bin/perl

GnarlyCharlie4u posted:

So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID?
The reason I ask is that I know OpenStack recommends you initialize disks in RAID 0 to take advantage of the controller's write cache, just in case bad things happen.

It doesn't matter, basically, unless you're using Gluster as a datastore with the disks on the host.

Otherwise, you should always use remote storage (iSCSI LUN, NFS, whatever). Don't set up a POSIX-compliant-FS data center just so you can use local storage. Export NFS from the host if you have to, but you should be using reliable storage somewhere else.

Hosts are identified by the UUID from DMI, so losing a host and needing to reinstall it is basically no skin off your back as long as the engine and VMs are on remote storage.

Use at least 2 hosts. Use remote storage. What you do with the disks on those hosts doesn't matter.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

GnarlyCharlie4u posted:

So if I'm installing oVirt on a host, how should I utilize the hardware RAID capabilities of that machine? Or should I not even use RAID?
The reason I ask is that I know OpenStack recommends you initialize disks in RAID 0 to take advantage of the controller's write cache, just in case bad things happen.
How ghetto do you want to be? In my home lab I boot from USB and have a Gluster datastore that holds my hosted engine. Each host exports its hard drives via iSCSI, and I have another VM that uses RDM or whatever oVirt calls it to build a RAID 10 pool from those LUNs. It's a nice distributed SAN, but I doubt anyone else would do this kind of poo poo in a business environment. Your boss, however, seems like he would like it.

evol262
Nov 30, 2010
#!/usr/bin/perl

adorai posted:

How ghetto do you want to be? In my home lab I boot from USB and have a Gluster datastore that holds my hosted engine. Each host exports its hard drives via iSCSI, and I have another VM that uses RDM or whatever oVirt calls it to build a RAID 10 pool from those LUNs. It's a nice distributed SAN, but I doubt anyone else would do this kind of poo poo in a business environment. Your boss, however, seems like he would like it.

Why not have them export as Ceph RBD or add them to a Gluster pool? Exporting LUNs, importing them, building a RAID on top of it, and using that as a datastore is something I've done with VMware (RBD to VMs, VMs DRBD with each other and export as NFS) for a team that didn't have a SAN, but there's no real need when everything talks Gluster natively.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

evol262 posted:

Why not have them export as Ceph RBD or add them to a Gluster pool? Exporting LUNs, importing them, building a RAID on top of it, and using that as a datastore is something I've done with VMware (RBD to VMs, VMs DRBD with each other and export as NFS) for a team that didn't have a SAN, but there's no real need when everything talks Gluster natively.
I have other reasons, but mainly because it's my lab.

Ahdinko
Oct 27, 2007

WHAT A LOVELY DAY

Thanks Ants posted:

Dell DPACK will do that.

That looks perfect, thank you

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

Docjowles posted:

It looks like you're reading the docs for Swift, which is the OpenStack analog to Amazon S3. Are you actually even going to be using that service? wyoak is right, you do not want to be doing RAID 0 on anything. I think the docs mention that just in case your RAID controller is annoying and does not have a JBOD mode. Swift wants individual, raw disks presented to it just like ZFS does. You can hack around that by setting up a bunch of "RAID 0" virtual disks with only one disk in each pool.
oVirt is more like a traditional VMware host. For that I think you want to set it up like you would any other server. Either RAID 6 or RAID 10 depending on your capacity, data protection and iops requirements.
This is the case for all of my PE2850s, and I think my 2900s.

evol262 posted:

It doesn't matter, basically, unless you're using Gluster as a datastore with the disks on the host.
Otherwise, you should always use remote storage (iSCSI LUN, NFS, whatever). Don't set up a POSIX-compliant-FS data center just so you can use local storage. Export NFS from the host if you have to, but you should be using reliable storage somewhere else.
Hosts are identified by the UUID from DMI, so losing a host and needing to reinstall it is basically no skin off your back as long as the engine and VMs are on remote storage.
Use at least 2 hosts. Use remote storage. What you do with the disks on those hosts doesn't matter.
Thank you.

Docjowles
Apr 9, 2009

GnarlyCharlie4u posted:

This is the case for all of my PE2850s, and I think my 2900s.

Thank you.

Wait, this is all being done on PE2850s? This story keeps getting more and more insane :allears: Is that just for labbing it up, or will your production environment actually be running on 8-10 year old servers?

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

Docjowles posted:

Wait, this is all being done on PE2850s? This story keeps getting more and more insane :allears: Is that just for labbing it up, or will your production environment actually be running on 8-10 year old servers?

Not all of it, we just have a shitton of them doing nothing, so my boss wanted to get them all cloudy.

I've got 2 C6100s (the XS23-TY3 pieces of poo poo)
Just finished building and fixing 3 Dell 2900s
2 HP ProLiant DL580 G4s
3 HP ProLiant DL360 G5s
a Dell PE6950 and attached MD1000

Plus whatever similar hardware we can free up once we begin to virtualize things.
Yes, our production environment will be running on really old poo poo.

Thanks Ants
May 21, 2004

#essereFerrari


Holy gently caress. The only way this is going to save money over a few new hosts + vSphere Essentials Plus is if your time costs gently caress all. In which case, condolences.

Rhymenoserous
May 23, 2008

GnarlyCharlie4u posted:

Not all of it, we just have a shitton of them doing nothing, so my boss wanted to get them all cloudy.

I've got 2 C6100s (the XS23-TY3 pieces of poo poo)
Just finished building and fixing 3 Dell 2900s
2 HP ProLiant DL580 G4s
3 HP ProLiant DL360 G5s
a Dell PE6950 and attached MD1000

Plus whatever similar hardware we can free up once we begin to virtualize things.
Yes, our production environment will be running on really old poo poo.

dude just quit.

jre
Sep 2, 2011

To the cloud?



GnarlyCharlie4u posted:

Not all of it, we just have a shitton of them doing nothing, so my boss wanted to get them all cloudy.

I've got 2 C6100s (the XS23-TY3 pieces of poo poo)
Just finished building and fixing 3 Dell 2900s
2 HP ProLiant DL580 G4s
3 HP ProLiant DL360 G5s
a Dell PE6950 and attached MD1000

Plus whatever similar hardware we can free up once we begin to virtualize things.
Yes, our production environment will be running on really old poo poo.

There is not enough alcohol in the world to be responsible for running production on recycled ancient hardware and a huge suite of complex software you have no experience with.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
After the afternoon I just spent where Meraki took down my production wifi, I feel much better.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Jesus, we decommed our HP G5 servers a couple of years ago; they're loving ancient at this point. I have some G6s I'm ready to put out to pasture as well.

KS
Jun 10, 2003
Outrageous Lumpwad

Moey posted:

After the afternoon I just spent where Meraki took down my production wifi, I feel much better.

Do tell. Considering buying them.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


I was considering it too, but then I realized that all the config is done "in the cloud" and that if I ever decide not to renew my subscriptions I'm stuck with the config as-is until the end of time. That soured me enough on the concept to make me look elsewhere.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

KS posted:

Do tell. Considering buying them.
NO DON'T (unless you're an MSP or whatever).

Thanks Ants
May 21, 2004

#essereFerrari


Number19 posted:

I was considering it too, but then I realized that all the config is done "in the cloud" and that if I ever decide not to renew my subscriptions I'm stuck with the config as-is until the end of time. That soured me enough on the concept to make me look elsewhere.

IIRC Meraki stuff will flat-out stop working; it won't just hold the last config that was pushed to it.

I don't see it as a huge issue, I just see the management licenses as being like the support contracts that I keep all the other stuff covered by.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

KS posted:

Do tell. Considering buying them.

I'll make a post about it in the wifi thread later, but the tier 1 tech made a change on a production SSID that took it down, then claimed he couldn't reverse it. I have the same SSID across all my campuses, just some with different VLAN tags and some currently without (still need to address these). Every site that didn't have that SSID tagged stopped working.

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof

Rhymenoserous posted:

dude just quit.

and deprive the ticket thread of all my glorious :gonk:
No way.

I'm actually on my way out, I just need to get hired elsewhere first.
Worst case, I can treat this whole thing as my personal lab and have an opportunity to learn a lot about virtualization in the meantime.

Quick question that I probably could have just Googled, but since I'm already posting here...
CIO wants CentOS 7, but the oVirt management engine can only handle 6.5.
Would it be a bad idea to install CentOS 7 on a host, then virtualize an instance of 6.5 to run the engine?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I love my singular meraki wireless access point, sitting under my desk at home after I took the webinar to get a free one.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


GnarlyCharlie4u posted:

Would it be a bad idea to install CentOS 7?

Yes, until 7.1 is released (at the very least).

evol262
Nov 30, 2010
#!/usr/bin/perl

GnarlyCharlie4u posted:

CIO wants CentOS 7, but the oVirt management engine can only handle 6.5.
Would it be a bad idea to install CentOS 7 on a host, then virtualize an instance of 6.5 to run the engine?

No, this is fine. EL7 has significant virt improvements over 6.5 or 6.6, and 7.0.z is out soon.

Also, the engine is fine on EL7 now, I think. Ask in #ovirt on OFTC. Most of the engine people are in Israel, so early US time is good to ask. Hosted engine on iSCSI was a little broken last time I looked, but NFS (v3 or v4) on EL7 should be OK. I'd be shocked if it didn't get pushed for the impending RHEV 3.5 release.

Serfer
Mar 10, 2003

The piss tape is real



I realize that this isn't a virtualization question, but does anyone have experience with Huawei CloudEngine switches? They're about half the price of comparable switches, and networking is kind of their thing; I just wonder if they're total crap or if they're worthwhile.

Wicaeed
Feb 8, 2005
What's the general consensus on Hyper-V?

I run a fairly small VMware shop (mostly Std licensing) with around 20 hosts and 200 or so VMs.

We're looking at Windows Server licensing right now for one of our products, which has a fairly high memory footprint requirement. Historically it has run only on 2008 R2 (or Windows 7 Pro, because that allows us to use 192GB of RAM compared to 32GB on 2008 R2 Std). For us to run it on 2008 R2 we now need the Enterprise edition to get around the 32GB memory limitation, which requires us to purchase 2012 R2 Datacenter to obtain those specific downgrade rights.

We're talking about purchasing 100 or so licenses of Datacenter purely for non-virtualization use, but I'm realizing now that if we do this, it would give us a fairly decent entry point into Hyper-V (at least for one of our datacenters).

I'm a little bit hesitant to use Hyper-V, if only because I haven't really used it before. We're also actually looking at deploying a small Azure-based solution this year, which from what I've heard integrates fairly well with Hyper-V.

Am I going to be sad if I give up on my dreams of deploying a VMware-based virtualization platform for our datacenter and go with Hyper-V instead?
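
For reference, the memory ceilings being danced around here can be expressed as a quick lookup. A sketch (the 32GB and 192GB figures match the post; the 2048GB Enterprise ceiling is from Microsoft's published memory limits):

```python
# Maximum addressable RAM per edition, in GB.
MAX_RAM_GB = {
    "2008r2_std": 32,    # Windows Server 2008 R2 Standard
    "2008r2_ent": 2048,  # Windows Server 2008 R2 Enterprise
    "win7_pro": 192,     # Windows 7 Professional
}

def edition_fits(edition, required_gb):
    """True if the edition can address the required amount of RAM."""
    return MAX_RAM_GB[edition] >= required_gb

print(edition_fits("2008r2_std", 64))  # -> False, hence the Enterprise dance
print(edition_fits("win7_pro", 64))    # -> True
```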

Thanks Ants
May 21, 2004

#essereFerrari


Hyper-V looks attractive once you're buying Datacenter licenses, but with the recent changes around the licensing of System Center I'd see what the actual price is to get something comparable to a vSphere Standard w/vCenter.

Wicaeed
Feb 8, 2005
Haha, holy poo poo, Datacenter licensing is almost $6k per server :stare:

You'd be an idiot (or someone who didn't plan properly) to purchase it and not use virtualization.


mayodreams
Jul 4, 2003


Hello darkness,
my old friend
My understanding is that you need only one copy of Datacenter per virtual host, be it Hyper-V or VMware, and you can have unlimited virtual instances on that host. Then again, Microsoft licensing is black magic, so who really knows.
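
On that unlimited-instances model, whether Datacenter beats stacking cheaper licenses is a per-host breakeven. A toy comparison using the thread's rough $6k Datacenter figure (the Standard price and two-VMs-per-license assumption are illustrative, not quotes — check your actual agreement):

```python
def cheaper_option(vms_per_host, datacenter_price=6000, standard_price=900,
                   vms_per_standard=2):
    """Compare per-host cost of one Datacenter license vs. stacking Standard.

    All prices and VM-per-license counts are illustrative assumptions.
    """
    licenses_needed = -(-vms_per_host // vms_per_standard)  # ceiling division
    standard_cost = licenses_needed * standard_price
    if datacenter_price < standard_cost:
        return ("datacenter", datacenter_price)
    return ("standard", standard_cost)

print(cheaper_option(4))   # -> ('standard', 1800)
print(cheaper_option(20))  # -> ('datacenter', 6000)
```

The point being: Datacenter only makes sense once VM density per host is high, which is why buying it and then not virtualizing looks so painful.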
