Docjowles
Apr 9, 2009

minato posted:

Still on Essex. If it ain't broke...

We started out on Grizzly, and had no end of awful issues. Actually a lot like what VC just described. Commands would just fail for no reason until you sent them for the 8th time, and then magically it would go through and all was well. The fix was always "restart RabbitMQ and deal with 30 seconds of severe network disruption, then it's all better for a week or two." Upgrading to Icehouse cleared up 99% of that. Until we moved to Icehouse, I honestly would have never told anyone to even attempt deploying OpenStack. Grizzly was that bad.

Maybe Essex had fewer features and less config, so there was less to go wrong? I'm also totally willing to cop to maybe having used incorrect configs. As noted, it's not like this poo poo is well documented by any stretch.

If it's working well for you, I wouldn't be in a hurry to touch anything, either. Although the irony of a "DevOps tool" like OpenStack being so fragile that you're terrified to change anything is not lost on me.


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
One other problem is that OpenStack is such a bear to set up, and pulls in so many dependencies and system resources, that most people don't have proper dev environments to test those changes in.

evol262
Nov 30, 2010
#!/usr/bin/perl
Rabbit sucks, honestly. I mean, it's great, but there's nowhere near enough messages in openstack to make a choice of broker relevant. Therefore, I always use amqp or qpid

Docjowles
Apr 9, 2009

evol262 posted:

Rabbit sucks, honestly. I mean, it's great, but there's nowhere near enough messages in openstack to make a choice of broker relevant. Therefore, I always use amqp or qpid

My impression has been that it's on the client side, but I could be way off base. I just know that it seems like when the connection to Rabbit is interrupted for 1 microsecond, all of the 5000 OpenStack logs start spamming stack traces and never recover until everything is restarted.

I'm with you on the "Rabbit sucks" bandwagon, though, for reasons totally outside of OpenStack.

Pile Of Garbage
May 28, 2007



Martytoof posted:

On the fringe of virtualization:

Do you guys have a recommended devops solution for deploying and configuring VMware templates? Ideally I'd like an AIO solution where I have a tool I can use to fill out a form, push a button, and it'll provision the VMware template, then use puppet or chef or ansible to configure the OS layer. I can figure out the latter part of this myself, but I'm looking for something that will also use vSphere to provision the VM itself.

I'm basically in analysis paralysis. There are so many devops tools that every time I try to figure this out I end up with 1000 Chrome tabs open and more questions than I started with.

A bit OT but can you recommend any good references for Chef? I've only just started mucking around with it on the side but am looking to use it for managing some of the VPSs I've got.

evol262
Nov 30, 2010
#!/usr/bin/perl

Docjowles posted:

I'm with you on the "Rabbit sucks" bandwagon, though, for reasons totally outside of OpenStack.

Is it because erlang's handling of hostnames is super lovely? Because that's why I actually hate it

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Is it because erlang's handling of hostnames is super lovely? Because that's why I actually hate it
I absolutely love how it refuses to use DNS and I have to config-manage hosts files on all of my cluster nodes.
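Since Erlang's distribution layer resolves peers by short hostname, the workaround really does come down to templating /etc/hosts across the cluster. A minimal sketch of that templating, with made-up node names and IPs (in practice the data would come from your config management tool):

```python
# Render /etc/hosts entries for a RabbitMQ cluster whose Erlang nodes
# resolve each other by short hostname. Node names/IPs are hypothetical.
CLUSTER_NODES = {
    "rabbit01": "10.0.0.11",
    "rabbit02": "10.0.0.12",
    "rabbit03": "10.0.0.13",
}

def render_hosts_entries(nodes, domain="example.com"):
    """Return /etc/hosts lines mapping each node's IP to its FQDN and
    short name, so resolution works even without DNS."""
    return "\n".join(
        f"{ip}\t{name}.{domain} {name}"
        for name, ip in sorted(nodes.items())
    )

print(render_hosts_entries(CLUSTER_NODES))
```

A Puppet `host` resource or an Ansible loop does the same job; the point is that the node list lives in data instead of hand-edited files on every box.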

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Docjowles posted:

My impression has been that it's on the client side, but I could be way off base. I just know that it seems like when the connection to Rabbit is interrupted for 1 microsecond, all of the 5000 OpenStack logs start spamming stack traces and never recover until everything is restarted.

I'm with you on the "Rabbit sucks" bandwagon, though, for reasons totally outside of OpenStack.

I'd be a lot more likely to blame Kombu/OpenStack than Rabbit for this, to be honest. I do backend engineering for OpenStack stuff now at my current employer and ran RabbitMQ clusters at prior employers, and Rabbit has always been pretty low on my list of concerns.

My OpenStack contribution: we currently run about 8 Havana clusters on Ubuntu 12.04, and my first major project after changing jobs was to migrate the codebase to Ubuntu 14.04/Kilo, assigned to me about 12 hours after Kilo dropped. Having come from the magical VMware universe where things Just Work but also things Cost A Lot Of Money, OpenStack has given me a lot of interesting experiences and a lot to think about. I admit I am skeptical that the project will survive without imploding under its own weight; I think there will always be a need for a private cloud offering like OpenStack, and for continuity's sake I hope it is OpenStack, but spend a few hours inside it and you come out really feeling like it's 20 different projects with 20 different visions and very little gluing it together.

Docjowles
Apr 9, 2009

chutwig posted:

I'd be a lot more likely to blame Kombu/OpenStack than Rabbit for this, to be honest. I do backend engineering for OpenStack stuff now at my current employer and ran RabbitMQ clusters at prior employers, and Rabbit has always been pretty low on my list of concerns.

Yup, that is what I meant by "client side". If a client (in this case, various OpenStack services) can't deal with the connection to Rabbit dropping for 1 second without completely blowing the gently caress up, that's not really the server's problem.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Docjowles posted:

Yup, that is what I meant by "client side". If a client (in this case, various OpenStack services) can't deal with the connection to Rabbit dropping for 1 second without completely blowing the gently caress up, that's not really the server's problem.

I missed the word "client". :v:

evol262
Nov 30, 2010
#!/usr/bin/perl

chutwig posted:

you come out really feeling like it's 20 different projects with 20 different visions and very little gluing it together.

This is the intentional openstack vision and it's kind of dumb, which is why I think we should become a real product. "All you need is keystone and a message queue!" But almost everyone also uses neutron, nova, swift, and glance, at least, with a lot of people picking up heat. Sure, the swiftstack people don't, but they can fork.

The rabbit problem you encountered may have been pika. Openstack is all python, and rabbit's python adapter isn't always the best codebase. Look how many issues there are about connections and be amazed it works at all
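The stack-trace-spam-until-restart behavior described above is what you get when a client treats a dropped connection as fatal. A sketch of the reconnect-with-backoff loop a better-behaved consumer would run instead; `connect` and `consume` stand in for the actual client calls (e.g. pika's `BlockingConnection` and `channel.start_consuming()`), and real code would catch `pika.exceptions.AMQPConnectionError` rather than the builtin `ConnectionError` used here to stay stdlib-only:

```python
import itertools
import time

def backoff_delays(base=1.0, cap=30.0):
    """Yield capped exponential delays: 1, 2, 4, ... up to `cap` seconds."""
    for attempt in itertools.count():
        yield min(cap, base * (2 ** attempt))

def run_with_reconnect(connect, consume, max_attempts=None, sleep=time.sleep):
    """Reconnect with backoff instead of dying on the first broken
    connection. The delay schedule deliberately never resets here; a
    fancier version would reset it after a healthy period."""
    delays = backoff_delays()
    failures = 0
    while True:
        try:
            conn = connect()
            return consume(conn)  # returns only on clean shutdown
        except ConnectionError:
            failures += 1
            if max_attempts is not None and failures >= max_attempts:
                raise
            sleep(next(delays))
```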

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?



Great post, to make sure I'm following...

  • DevOps is a method of delivering software fast, since features are worthless until users use them.
  • Waterfall is an older method of software development. Development works forever on a huge software package, then goes to ops: install this now! Everything breaks, tons of crap doesn't work.
  • Agile is a newer method of software development. Faster releases; Operations still struggles to keep up.
  • DevOps works with the Dev side and "automates" much of the environment. Things like Puppet, Chef, DSC, DHCP, OS images, etc.

Okay, the above makes sense and I totally see why we're headed in that direction, but one thing I'm a little perplexed about is the role of the System Administrator, since it seems that the position is essentially being automated out of existence. Is the SA going to be around in a decade? To make things a little more confusing, I'm not following the whole "Pets vs. Cattle" idea, where we're not supposed to have individual systems that need special attention; how does that apply to things like DHCP/DNS/database servers?

EDIT - I know OpenStack wasn't addressed, but to my understanding it's another IaaS platform (on-premise, with your own hardware) made of a bunch of projects put together, and it isn't necessarily competing with standard IaaS providers like On-Prem Vert. or Azure, Amazon, EC2...

Gucci Loafers fucked around with this message at 01:30 on Jun 17, 2015

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
I see DevOps as "the owners of the deployment pipeline", which starts at developers, flows through testing, into staging (which arguably is part of testing) and finally out to production. Traditionally it's taken months for software to make it through that pipeline, but now the focus is on reducing the friction so much that devs can make multiple deployments per day if they so wish.

Agile = "Waterfall on a much smaller timescale, so that if/when it goes wrong, not too much is lost". There is still the "Design, Implement, Test, Deploy" software lifecycle, it's just on a 1-2 week iteration instead of a 6-month one. DevOps as a concept is independent of Agile, but strongly supports developers using Agile because Agile developers will be completing features and wanting to deploy them every iteration, much more frequently than Traditional Waterfall.

A DevOps person acts as a consultant to the Dev team, so that the product's path through that pipeline will be as fast and frictionless as possible. They have broad skills; they know how programs are built, how they are tested, and enough about systems / networks to know how to deploy them efficiently.

A ProdOps person is the traditional SysAdmin. These are the mechanics of the pipeline, with a primary focus on Production where resources are needed most and are most business-critical. Those are your Network Engineers, DBAs, and other types of SysAdmin.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Tab8715 posted:

To make things a little more confusing, I'm not following the whole "Pets vs. Cattle" idea, where we're not supposed to have individual systems that need special attention; how does that apply to things like DHCP/DNS/database servers?
There will always be some "snowflake" servers that need long uptime and high availability. But if infrastructure is represented as code, there isn't a single service that can't be easily recreated if it suddenly dies. In other words, you still may need some long-lived and reliable "cattle"... but if they get too sick, it should be easy to kill 'em off and get another one.
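The "kill 'em off and get another one" idea is easier to see as code. A toy reconciler under obviously made-up names: desired state is data, and anything sick or unwanted gets terminated and recreated rather than nursed back to health:

```python
DESIRED = {"dns01", "dhcp01", "db01"}  # hypothetical service inventory

def reconcile(desired, running, sick, provision, terminate):
    """Shoot unhealthy or unwanted instances, create missing ones,
    and return the resulting set of running instances."""
    for name in sorted((running - desired) | (running & sick)):
        terminate(name)
        running = running - {name}
    for name in sorted(desired - running):
        provision(name)
        running = running | {name}
    return running

actions = []
result = reconcile(
    DESIRED,
    running={"dns01", "db01", "leftover"},
    sick={"db01"},
    provision=lambda n: actions.append(("create", n)),
    terminate=lambda n: actions.append(("kill", n)),
)
print(result, actions)
```

The `provision`/`terminate` hooks are where your actual tooling would plug in; nothing above cares which particular box it is, which is the whole point.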

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

I see DevOps as "the owners of the deployment pipeline", which starts at developers, flows through testing, into staging (which arguably is part of testing) and finally out to production. Traditionally it's taken months for software to make it through that pipeline, but now the focus is on reducing the friction so much that devs can make multiple deployments per day if they so wish.

...

A DevOps person acts as a consultant to the Dev team, so that the product's path through that pipeline will be as fast and frictionless as possible. They have broad skills; they know how programs are built, how they are tested, and enough about systems / networks to know how to deploy them efficiently.
These are all neat explanations of why companies constantly mis-name certain things "DevOps," but it has essentially nothing to do with DevOps in reality and this meaning is continuously disclaimed by every single person who had anything to do with actually getting the movement started. The very idea of a "DevOps person" is, in fact, completely antithetical to everything that DevOps has been trying to accomplish.

Did you know that the name Unix came about because in the 1870s, when it was invented, nobody had discovered genitals yet?

Vulture Culture fucked around with this message at 02:04 on Jun 17, 2015

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Tab8715 posted:

Great post, to make sure I'm following...

  • DevOps is a method of delivering software fast, since features are worthless until users use them.
  • Waterfall is an older method of software development. Development works forever on a huge software package, then goes to ops: install this now! Everything breaks, tons of crap doesn't work.
  • Agile is a newer method of software development. Faster releases; Operations still struggles to keep up.
  • DevOps works with the Dev side and "automates" much of the environment. Things like Puppet, Chef, DSC, DHCP, OS images, etc.

Okay, the above makes sense and I totally see why we're headed in that direction, but one thing I'm a little perplexed about is the role of the System Administrator, since it seems that the position is essentially being automated out of existence. Is the SA going to be around in a decade? To make things a little more confusing, I'm not following the whole "Pets vs. Cattle" idea, where we're not supposed to have individual systems that need special attention; how does that apply to things like DHCP/DNS/database servers?

EDIT - I know OpenStack wasn't addressed, but to my understanding it's another IaaS platform (on-premise, with your own hardware) made of a bunch of projects put together, and it isn't necessarily competing with standard IaaS providers like On-Prem Vert. or Azure, Amazon, EC2...
DevOps is the methodology. DevOps doesn't work with anybody in particular, because DevOps is by definition not a role or function. Consider any other philosophy of organizing a company. If you said "we're going to have No Managers," like Zappos, you wouldn't have a person whose job was to be a No Manager, right? That would defeat the whole point.

You raise some interesting questions. What's clear is that systems administration is likely not going to be the same profession in 10 years that it is today. This is less because of private infrastructures like OpenStack, but because of public ones. What's really transformative about public cloud is that anyone inside the organization can do it with a credit card. You don't need buy-in from the IT department, there's no lead-time on servers, there's no infighting with other departments to see who gets the IT department's attention and resources. What we're seeing already within high-performing organizations like Netflix is that development and operations are often embedded into the same product teams -- the people who build it are responsible for operating it. We're seeing less focus on "centers of excellence," which is what most traditional IT departments are -- big collections of people with very specialized skills who are essentially farmed out to other departments as needed. What I think we're likely to see as public cloud reduces friction to getting applications online is that cloud tech specialists, probably with some level of systems administration competency, will end up embedded in app teams more widely. We'll also see traditional, non-coding administrators end up developing deeper skillsets (i.e. end-to-end performance management) within more focused centers of excellence, or otherwise dwindling in number while they manage things like ERP systems and weird line-of-business applications.

This isn't likely to happen soon. It isn't likely to happen in all industries, especially ones that are very risk-averse due to regulatory requirements or data sensitivity. (Public cloud is the nightmare scenario for people who fear Shadow IT.) But for new projects, we're already seeing the shift, and it's big.

"Pets vs. Cattle" is a concept that's started to annoy me because it's begun conflating a number of different business goals. I don't really like the CERN definition of cattle being something where, if it gets sick, you kill it and get a new one. If you have bare-metal backups of all of your systems, that minimally satisfies the cattle criteria, right? But I don't think that's a valuable goal. It doesn't get you further from the metal and able to properly leverage what cloud technology buys you. It's better to figure out the nuances of what cloud actually means, and target them, because in a lot of cases it's valuable to your business to do that.

Vulture Culture fucked around with this message at 02:22 on Jun 17, 2015

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I think that the system administrator is still going to exist in the future, but they will look different and be far less numerous. It's not "the cloud" that is doing it, it's automation, which is really the driving force behind "the cloud". I can do a lot more work than a peer who doesn't know, or doesn't want to learn, how to script. Even rudimentary powershell scripts are an immense force multiplier these days. Businesses of a certain size will always need admins who have an excellent high-level view of the systems they use, who can tie disparate systems together, and who can creatively solve problems. The real problem is that many of the surviving positions are going to be the already-high-level individuals, and then the helpdesk. There won't be as much of a need for a junior admin position, because let's be honest, most of their work can be automated away too.

Docjowles
Apr 9, 2009


I'm with VC on this one. So I'll just zero in on a point he did not address and give my own opinion.

quote:

Agile = "Waterfall on a much smaller timescale, so that if/when it goes wrong, not too much is lost". There is still the "Design, Implement, Test, Deploy" software lifecycle, it's just on a 1-2 week iteration instead of a 6-month one.

Not really. A key difference is that in an org practicing DevOps, the Ops team is present and actively included in the planning and design phase. If it were straight up waterfall, that would not be the case. The first time Ops would hear about a new feature is when they get a code drop and are told to deploy it. And sucks to be you if it blows up, it worked on my laptop at 1 request per minute LMAO!!!!1! That's why the metaphor is "waterfall". poo poo rolls downhill and there is zero feedback upstream. That sucks less on a 2 week time scale, but it still sucks.

In a DevOps setup, when a new feature is being hashed out, someone from Ops is there to say "uhhh we are gonna need 5,000 more servers to support that feature." Or to hear the person proposing that they move the whole app from Python 2 to 3 and point out that we don't have Python 3 installed. Or conversely, to notify dev of a plan to upgrade servers to RHEL 7 and ask what impact that would have. They're not there to say no or shoot down ideas. Reflexively saying "no changes" is the old style of Ops we're actively trying to get away from. The point is to give feedback on the impact of the change, and a realistic estimate as to when it can be deployed, so the company as a whole can get on the same page and collectively hit its deadlines.

And if you're using the practices and tools that come along with DevOps, most of those changes should be very quick to make, thanks to config management, automated testing, infrastructure provisioned by APIs, etc.

Docjowles fucked around with this message at 04:27 on Jun 17, 2015

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

I think that the system administrator is still going to exist in the future, but they will look different and be far less numerous. It's not "the cloud" that is doing it, it's automation, which is really the driving force behind "the cloud". I can do a lot more work than a peer who doesn't know, or doesn't want to learn, how to script. Even rudimentary powershell scripts are an immense force multiplier these days. Businesses of a certain size will always need admins who have an excellent high-level view of the systems they use, who can tie disparate systems together, and who can creatively solve problems. The real problem is that many of the surviving positions are going to be the already-high-level individuals, and then the helpdesk. There won't be as much of a need for a junior admin position, because let's be honest, most of their work can be automated away too.
I just checked my history, and my blog post on how over-automation will eat the IT middle class is now over 5 years old. It's still a ways off, but it's getting truer every day. There will be landowners and serfs. Advancement will be predicated on formal education and training, like every other profession that has eliminated its mobility in the name of ever-increasing efficiency.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


The whole automation of IT leaves me somewhat depressed, since the late 90s are what got me into technology. I didn't know it at the time, but everything I did growing up, from installing games on my Dad's 386 to building a PC for LANs or setting up a web server just for fun, has paid off. Not only in a technical sense but socially, since I've met some awesome people because we both loved technology. But with "Plug and Play" becoming a reality, things won't be the same. :smith:

While the world moves fast, corporations still move incredibly slowly, but the future is going to be much simpler. On the other hand, I think I'll spend more of my free time learning to program.

Gucci Loafers fucked around with this message at 05:17 on Jun 17, 2015

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Point taken: DevOps is a methodology. My point in bringing up a "DevOps person" is that ultimately someone has to bridge the people whose expertise lies on either end of the deployment pipeline (i.e. Devs and Ops). As was rightly pointed out, the whole team follows the methodology by implementing CI, automation, etc... but I still feel there should ideally be individuals on the team with a DevOps focus to make it all gel.

Re: Agile = Waterfall, I agree that Ops are present at the Design phase in order to prevent problems further down the track, but that wasn't what I was getting at; that practice would be useful for Traditional Waterfall too. The point was in response to "Agile is a new method of software development", and my aim was to point out that the SDLC doesn't change, it's just compressed. Agile is not some magical new way of doing things, it's just about recognizing that long SDLCs don't work very efficiently so it trades off volume for speed.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

Re: Agile = Waterfall, I agree that Ops are present at the Design phase in order to prevent problems further down the track, but that wasn't what I was getting at; that practice would be useful for Traditional Waterfall too. The point was in response to "Agile is a new method of software development", and my aim was to point out that the SDLC doesn't change, it's just compressed. Agile is not some magical new way of doing things, it's just about recognizing that long SDLCs don't work very efficiently so it trades off volume for speed.
Largely true, though you have crazytown people like Vasco Duarte and Woody Zuill advocating in favor of approaches that aspire to remove estimates from the equation altogether. Without those, it's much harder to make any kind of comparison to waterfall. On the other hand, I'm not sure it's still Agile, either.

Pile Of Garbage
May 28, 2007



Anyone have any recommendations for a tool to analyse VM logfiles? We've started doing Snap Backups of VMs with CommVault and I want to see the stun-times incurred by snapshot removal for a largish number of VMs.
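If nothing else, a rough stdlib sketch works for pulling stun durations straight out of each VM's vmware.log. This assumes the "Checkpoint_Unstun: vm stopped for N us" line format seen in ESXi 5.x-era logs, so verify the pattern against your own logs first:

```python
import re

# Matches e.g. "... Checkpoint_Unstun: vm stopped for 508977 us".
# The line format is an assumption; check it against your ESXi version.
STUN_RE = re.compile(r"Checkpoint_Unstun: vm stopped for (\d+) us")

def stun_times_ms(lines):
    """Return stun durations in milliseconds, in log order."""
    out = []
    for line in lines:
        m = STUN_RE.search(line)
        if m:
            out.append(int(m.group(1)) / 1000.0)
    return out

# Hypothetical log lines for demonstration:
sample = [
    "2015-06-17T01:30:00.000Z| vmx| Checkpoint_Unstun: vm stopped for 508977 us",
    "2015-06-17T01:31:00.000Z| vmx| unrelated line",
    "2015-06-17T01:32:00.000Z| vmx| Checkpoint_Unstun: vm stopped for 12045 us",
]
print(stun_times_ms(sample))
```

Point it at the vmware.log files in each VM's datastore directory and you get per-snapshot stun times without standing up any log infrastructure.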

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I have not deployed this yet, but it is on my list to test out.

http://www.sexilog.fr/

Docjowles
Apr 9, 2009

I was gonna say just stand up ELK (or Splunk, if you're within the free tier) since they're great general purpose log analysis tools. But if there's an ELK appliance pre-tuned for VMware, that's even better.

Although I'm pretty sure I'd have to run a search and replace on every instance of "Sexilog" in the source before deploying it, because :lol: Also, it looks like Sexilog is still on Kibana 3. Version 4 is a decent upgrade in terms of usability and features.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Moey posted:

I have not deployed this yet, but it is on my list to test out.

http://www.sexilog.fr/

I tried it a couple of months ago and it worked for like a day and then stopped. YMMV

Pile Of Garbage
May 28, 2007



Thanks for the suggestions. We've already got Splunk running internally so I'll probably use that. Sexilog looks interesting but as Docjowles said I don't think it would look good.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
You can easily just grab the filters and dashboards from Sexilog for your own ELK instance. It's got a lot of great stuff.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Sexilog might be good but I won't be deploying it anywhere thanks to the name.

ragzilla
Sep 9, 2005
don't ask me, i only work here


Gyshall posted:

Sexilog might be good but I won't be deploying it anywhere thanks to the name.

Just stand up your own ELK stack with one of the hundreds of tutorials (including theirs) and bring in their config files.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Since I am the only person at my workplace that would ever look at it, I am fine with deploying it as is. Outside of that, I entirely agree about the name.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
devops is a dumb buzzword for "clean up your own mess": namely, that the best people to support a product are the ones who developed it, and having skin in the game when poo poo goes wrong promotes a stable, successful product


thats it

the rest is fluff



also if you havent shot your pets you should consider doing so

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Vulture Culture posted:

"Pets vs. Cattle" is a concept that's started to annoy me because it's begun conflating a number of different business goals. I don't really like the CERN definition of cattle being something where, if it gets sick, you kill it and get a new one. If you have bare-metal backups of all of your systems, that minimally satisfies the cattle criteria, right? But I don't think that's a valuable goal. It doesn't get you further from the metal and able to properly leverage what cloud technology buys you. It's better to figure out the nuances of what cloud actually means, and target them, because in a lot of cases it's valuable to your business to do that.

that's the mostly operational definition of cattle though

it does mean that you can get into situations where your supervisor processes are so good they mask a few race conditions (oops)

Wicaeed
Feb 8, 2005

mayodreams posted:

I tried it a couple of months ago and it worked for like a day and then stopped. YMMV

Yep, same. Worked for like 2 days and was great, then the logs stopped populating and interface went unresponsive.

evol262
Nov 30, 2010
#!/usr/bin/perl

Malcolm XML posted:

devops is a dumb buzzword for "clean up your own mess": namely, that the best people to support a product are the ones who developed it, and having skin in the game when poo poo goes wrong promotes a stable, successful product

thats it

the rest is fluff
I'm gonna disagree with this 100%. Improved collaboration, breaking down the walls between armed camps of internal stakeholders who pass the buck back and forth between systems/network/database/dev whenever anything goes wrong, a shorter development pipeline, continuous integration to feed automated builds/deployments from that pipeline, reproducible deployments, infrastructure as code, and all the rest of the stuff that makes "devops" tick as a movement has nothing to do with product support. Moreover, it's that the best people to support a product are everyone involved, all at once, so the problems can quickly be found, fixed, and a new build pushed out instead of waiting 4 days for ops to escalate that ticket to dev who punts it to network.

Malcolm XML posted:

that's the mostly operational definition of cattle though
Bare-metal backups don't fit infrastructure as code. PXEbooting a foreman discovery image (or whatever) or installing from a Glance image via Ironic or something, yes. Cattle don't need backups.

Malcolm XML posted:

it does mean that you can get into situations where your supervisor processes are so good they mask a few race conditions (oops)
This has what to do with cattle? And why do you have supervisor processes instead of swarming service discovery, and shooting nodes that fall out of quorum or the swarm?

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe
Our vmware environment is getting really close to capacity for memory, and I've been working on identifying where some fat can be trimmed. A number of running VMs report 0 CPU or RAM in the vsphere client and I can't figure out why. If I go back far enough in the performance tab I can find some data, but at varying times they just stop reporting. Rebooting the VM doesn't have any effect. When it's shut off the host shows a drop in memory and CPU usage, but it just can't seem to figure out where those resources go when the VM starts up again. Any suggestions?

kiwid
Sep 30, 2013

I haven't really hosed with VMware for over a year. Anyway, last night I decided to get my home lab up and running again and got everything installed. I set up FreeNAS and exposed a ZFS volume over iSCSI. I connected to my VMware host, added the iSCSI software adapter, and set up the networking, etc. I then added the iSCSI discovery IP and it found the static paths just fine. 3, to be exact, which is correct since I have 3 NICs both on the host and the FreeNAS box. So VMware successfully sees my FreeNAS LUN and the paths, but when I go into the storage part and try to create a datastore, VMware gets hung up on loading "Current disk layout".

(not my image)


Basically it gets stuck on the above but doesn't show any of the device information, it just says "loading" forever.

What could be wrong?

edit: And this happens on multiple VMware hosts.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

stubblyhead posted:

Our vmware environment is getting really close to capacity for memory, and I've been working on identifying where some fat can be trimmed. A number of running VMs report 0 CPU or RAM in the vsphere client and I can't figure out why. If I go back far enough in the performance tab I can find some data, but at varying times they just stop reporting. Rebooting the VM doesn't have any effect. When it's shut off the host shows a drop in memory and CPU usage, but it just can't seem to figure out where those resources go when the VM starts up again. Any suggestions?

Missing performance data can be a consequence of the SQL performance rollup jobs being missing. It's a common problem when the SQL DB has been moved. Might want to check that.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004382

Are all hosts/guests missing data at the same times?

Wicaeed
Feb 8, 2005

NippleFloss posted:

Missing performance data can be a consequence of the SQL performance rollup jobs being missing. It's a common problem when the SQL DB has been moved. Might want to check that.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004382

Are all hosts/guests missing data at the same times?

Since you seem to know about this, how do you know when you're running into performance issues with vCenter?

We've had our cluster up for about 2 years at this point, fairly problem free. We did upgrade from vCenter Essentials to Standard about a half year back, and then upgraded SQL from Express to Std when we hit our host limit on the DB.

Overall it's been performing sluggishly for a few months now, so I threw some vCPUs and RAM at the VM (3 vCPU/16GB RAM now). The datastore it's on is a Nimble CS300 backed by 10Gbit storage, so quite beefy all around. Just checking the stats of the VM itself, I haven't noticed much load on the CPU, or even that much memory usage on the VM.

Within the OS itself, though, Windows shows 12GB or so of memory used, but it's not really all in use thanks to caching, etc.

Any tips?


Internet Explorer
Jun 1, 2005





Wicaeed, check this article to make sure your memory settings are appropriate.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021302

Also, I would do 2 vCPU instead of 3. Although it's definitely better than it used to be, I don't see the need for odd vCPU assignments.
