|
Internet Explorer posted:Yes, that is what I was trying to say earlier. Depending on the vendor the verbiage can change. You can hard-set it in UEFI/BIOS or you can set UEFI/BIOS to allow the OS to manage it, which ESXi supports. "Supports" I've seen so many more PSODs on HP ESXi servers set to OS control that I don't even mess with it any more. Just static high performance in the BIOS.
|
# ? Dec 17, 2014 08:13 |
|
|
evol262 posted:CPU cstates shouldn't require messing with EFI unless you're completely disabling them. I'd be utterly amazed if there weren't a tunable for it somewhere in vmkernel I would've expected this as well when modifying it within the vSphere client. However, upon checking the BIOS/EFI, I noted that it was set to System DBPM for the CPU. I'm assuming this is how Dell shipped it. There is also an OS DBPM option, which probably explains why changing the profile in ESXi made no difference. I imagine if you're running a large datacenter with huge clusters, fine-tuning the configuration would be vital. This is one server running in an office building, so if it works, good enough. Moey posted:I thought it could be hard set in the bios, or turned to "guest os control"? This is what Dell calls OS DBPM, I believe.
|
# ? Dec 17, 2014 23:43 |
|
Oh great. The VMware/e1000 bug that's been around for over a year now, and was supposedly fixed in 5.5 U2? Not so much. We patched to 5.5 U2 a couple months ago because of this, and we just had another host crash. They're telling us to go to P03. Adding to the confusion is the original KB article not being updated, yet it's the first result of a Google search for "VMware e1000 psod". I wish I was still on 5.1.
|
# ? Dec 17, 2014 23:53 |
|
KS posted:"Supports" HPs since the Gen6es have been notorious for lengthy and frequent SMI activity when ESXi's power saving settings are set to anything other than High Performance (i.e., disabling ESXi's power control). Even then, HP officially recommends using Static High Performance and disabling all power saving functionality anyway. I'm not totally convinced it's HP's problem, but nor do I think it's necessarily VMware's. Seems like a disagreement on the approach. I haven't seen any Dells do it, but they're in the KB as well. Edit: Sweet, new avatar for newbies!
|
# ? Dec 18, 2014 00:05 |
|
The whole thing has been a problem for such a long time, I can only imagine it's Intel/AMD/Dell/HP/VMware/whoever-the-gently caress-else not being able to agree on anything. poo poo never seems to work right until you set it in BIOS/UEFI.
|
# ? Dec 18, 2014 02:24 |
|
Richard Noggin posted:Oh great. The VMware/e1000 bug that's been around for over a year now, and was supposedly fixed in 5.5 U2? Not so much. We patched to 5.5 U2 a couple months ago because of this, and we just had another host crash. They're telling us to go to P03. Adding to the confusion is the original KB article not being updated, yet it's the first result of a Google search for "VMware e1000 psod". I've had to update 7 hosts at 4 clients to 5.5u3 to fix this, and so far it has fixed every single one. Fingers crossed it stays unfucked!
|
# ? Dec 18, 2014 03:27 |
|
Internet Explorer posted:The whole thing has been a problem for such a long time, I can only imagine it's Intel/AMD/Dell/HP/VMware/whoever-the-gently caress-else not being able to agree on anything. poo poo never seems to work right until you set it in BIOS/UEFI. There's a pretty unified interface for this. Maybe VMware does it badly? It's not "not being able to agree," since it works fine in Linux and BSD. Kerpal posted:I would've expected this as well when modifying it within vSphere client. However, upon checking the BIOS/EFI, I did note that it was on System DBPM for the CPU. I'm assuming this is how Dell shipped it. There is also an OS DBPM option, which probably explains why changing the profile in ESXi made no difference. I imagine if you're running a large datacenter with huge clusters, fine tuning the configuration would be vital. This is one server running in an office building so if it works, good enough.
|
# ? Dec 18, 2014 07:00 |
|
For anyone using Veeam Free Edition, is there any way to run backups on an automated schedule? I understand completely that this is part of the paid suite, but is there a workaround that can be used in the interim?
|
# ? Dec 23, 2014 00:08 |
|
Wicaeed posted:For anyone using Veeam Free Edition, is there any way to schedule backups to run on an automated schedule? If Veeam has an API or something, you could script a cron job to kick this off. Or if Veeam just runs on a Windows machine and has any sort of CLI, you could use a scheduled task.
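For the scheduled-task route, here's a rough sketch: a small Python helper that builds a `schtasks.exe` invocation to register a nightly task on the Windows box. The task name and backup-script path are made-up placeholders (not anything Veeam ships); the point is only that whatever command kicks off your backup can be wrapped this way.

```python
import subprocess

def build_schtasks_cmd(task_name, script_path, time="02:00"):
    """Build a schtasks.exe command that registers a daily task.

    schtasks is the stock Windows CLI for scheduled tasks; script_path
    is whatever command kicks off the backup on your machine (a
    hypothetical placeholder here, not a real Veeam entry point).
    """
    return [
        "schtasks", "/Create",
        "/TN", task_name,    # task name shown in Task Scheduler
        "/TR", script_path,  # command the task runs
        "/SC", "DAILY",      # schedule type
        "/ST", time,         # start time, HH:MM
    ]

cmd = build_schtasks_cmd("NightlyVeeamBackup", r"C:\scripts\run_backup.cmd")
# subprocess.run(cmd, check=True)  # uncomment on the actual Windows host
print(" ".join(cmd))
```

You could equally paste the resulting command straight into an elevated prompt; `/SC DAILY` and `/ST` are standard schtasks flags.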
|
# ? Dec 23, 2014 04:31 |
|
1000101 posted:If Veeam has an API or something you could script a cron job to kick this off. Or if Veeam just runs on a windows machine and has any sort of CLI you could use a scheduled task. Nope, looks like that is all part of the Veeam paid edition. Oh well. Separate topic, but what is the going rate for vRealize Orchestrator? How is it licensed? In the past month or two I've had to set up some test environments for new products coming down the pipeline, and it's a very labor-intensive process for me (installing the OS, configuring settings in VMware: networking, storage, etc.). It looks like most of the VMware stuff could potentially be automated with Orchestrator, and maybe some of the OS configuration/configuration of IPs & networking as well.
|
# ? Dec 25, 2014 05:29 |
|
Wicaeed posted:Separate topic, but what is the going rate for vRealize Orchestrator? How is it licensed? Pretty sure the license is included with vCenter. How handy are you with JavaScript?
|
# ? Dec 25, 2014 15:24 |
|
parid posted:Pretty sure the license is included with vCenter. How handy are you with JavaScript? It is indeed included with your vCenter license. Just need to go turn it on and set it up.
|
# ? Dec 25, 2014 18:01 |
|
parid posted:Pretty sure the license is included with vCenter. How handy are you with JavaScript? So it is! And JavaScript? Never touched it.
|
# ? Dec 26, 2014 04:00 |
|
Wicaeed posted:
There's a fair amount you can do in vCO without resorting to JavaScript, and plenty of samples out there to get you through the parts that need it. Plenty of info here: http://www.vcoteam.info/
|
# ? Dec 26, 2014 11:05 |
|
I'm being forced headlong into this thread as my company has finally begun to implement "The Cloud." The decision has come down that we will be using OpenStack managed by Foreman. I've been told that OpenStack makes everything highly available and redundant, and none of our VMs will ever go down. We can add nodes at will and scale VMs vertically and horizontally as needed. After about 10 minutes of research I know this is not the case, and I'm afraid that our CIO has no idea what he's doing.

I've had some limited experience with running VMware on a single computer, so I understand how you can divide up a single piece of metal into multiple servers, but I am struggling to get a grasp on IaaS. I've read through a lot of this thread, including this post and beyond. I understand that there are different services in OpenStack: Nova, Cinder, Horizon, and so on. However, I do not understand how they work together. For example: let's say I have a bunch of VMs running and have maxed out my resources. What happens when one of my nodes (physical server) dies? Is all of that computing power distributed equally between all of the metal, or does OpenStack just choose which server to run the VM on based on the number of cores?

I am but a lowly helpdesk monkey and barely have a grasp on basic networking. I have no idea where to begin learning this. Is there a resource that can help me get a grasp on this ASAP?
|
# ? Dec 26, 2014 22:50 |
|
Good luck with that.
|
# ? Dec 26, 2014 23:06 |
|
Someone will probably comment better than I can, but I'd say that to an extent OpenStack is analogous to AWS or Azure, just running in-house. Things like Nova, Swift, Glance, etc. are the "chunks" that make it work.
|
# ? Dec 26, 2014 23:17 |
|
I'd run with it. Success or fail, you'll probably get a chance to work with some neat technology and it'll be a good career boost.
|
# ? Dec 26, 2014 23:21 |
|
GnarlyCharlie4u posted:I've been told that OpenStack makes everything highly available and redundant, and none of our VM's will ever go down. We can add nodes at will and scale VM's vertically and horizontally as needed. I feel that a generous description of OpenStack is that it's a collection of software for launching and managing kvm/qemu instances, and the hairy networking between them. It doesn't provide the magical ability to keep VMs up without downtime, it doesn't automatically provision new VMs for you when you need more compute resources, and it can't (AFAIK) migrate instances between compute nodes like VMware's vMotion does. The whole point of it is to allow you to spin up and tear down VM instances at will, so it can be used as a platform for vertical/horizontal scaling. But a basic installation provides no intelligence in that regard; it won't monitor resource thresholds for you and automatically adjust to the capacity you need. You have to use a higher-level system (i.e. a PaaS) to do that. The Heat component helps perform some of this, but it's relatively new and I'm not familiar enough with it to comment on it. quote:For example; Let's say I have a bunch of VM's running and have maxed out my resources. What happens when one of my nodes (physical server) dies? Is all of that computing power distributed equally between all of the metal, or does OpenStack just choose which server to run the vm based on the #of cores? When OpenStack decides which Compute node it'll provision your instance on, it takes into account the desired "flavor" (# CPUs / memory / disk), but I don't believe you can give it any more hints than that. From a tenant's POV, they're supposed to be completely unaware of the underlying hardware. (I'm not sure that's practically true; I don't have enough experience to say that authoritatively.) quote:I am but a lowly helpdesk monkey and barely have a grasp on basic networking. I have no idea where to begin learning this. Is there a resource that can help me get a grasp on this ASAP? Basic networking, or OpenStack networking? Because if you want to know how OpenStack networking works, then poke around RedHat's site. OpenStack's documentation is absolutely terrible, but RedHat and their RDO/"packstack" OpenStack distribution do a reasonable job of making it easier to understand what's happening under the covers. For example, this article is 1000x better than anything you'll find in the OpenStack docs: https://openstack.redhat.com/Networking_in_too_much_detail
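The flavor-based placement described above can be sketched as a toy model. To be clear, this is an illustration of the filter-then-weigh idea, not nova-scheduler's actual code; the class and field names are invented, and real nova has many more filters and weighers.

```python
from dataclasses import dataclass

@dataclass
class Flavor:          # the "size" a tenant requests: vCPUs / RAM / disk
    vcpus: int
    ram_mb: int
    disk_gb: int

@dataclass
class ComputeNode:     # free capacity on one hypervisor
    name: str
    free_vcpus: int
    free_ram_mb: int
    free_disk_gb: int

def pick_host(flavor, nodes):
    """Crude stand-in for the scheduler's filter phase: keep nodes that
    can fit the flavor, then (like the weighing phase) prefer the one
    with the most free RAM."""
    fits = [n for n in nodes
            if n.free_vcpus >= flavor.vcpus
            and n.free_ram_mb >= flavor.ram_mb
            and n.free_disk_gb >= flavor.disk_gb]
    if not fits:
        raise RuntimeError("No valid host was found")
    return max(fits, key=lambda n: n.free_ram_mb)

nodes = [ComputeNode("compute1", 4, 8192, 100),
         ComputeNode("compute2", 16, 65536, 500)]
print(pick_host(Flavor(vcpus=8, ram_mb=16384, disk_gb=40), nodes).name)  # compute2
```

Note what the toy model does *not* do, matching the post above: nothing reacts when a node dies, and the tenant never sees which host was picked.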
|
# ? Dec 26, 2014 23:26 |
|
Tab8715 posted:Someone will probably comment better than I can, but I'd say that to an extent OpenStack is analogous to AWS or Azure, just running in-house. Things like Nova, Swift, Glance, etc. are the "chunks" that make it work. This is about all I understand of it. Dr. Arbitrary posted:I'd run with it. It's nearly impossible to get my boss to change his mind about things, even when he KNOWS he is wrong. I'm just trying to 1) keep him from breaking all of the things and 2) actually see a project to fruition for once.
|
# ? Dec 26, 2014 23:32 |
|
minato posted:You are correct. minato posted:Basic networking, or OpenStack networking? Because if you want to know how OpenStack networking works, then poke around RedHat's site. OpenStack's documentation is absolutely terrible, but RedHat and their RDO/"packstack" OpenStack distribution do a reasonable job of making it easier to understand what's happening under the covers. For example, this article is 1000x better than anything you'll find in the OpenStack docs: https://openstack.redhat.com/Networking_in_too_much_detail It appears my boss has COMPLETELY the wrong idea about what OpenStack is and how it works.
|
# ? Dec 26, 2014 23:42 |
|
It sounds like your CIO thinks OpenStack is just like vSphere but without the licensing costs.
|
# ? Dec 26, 2014 23:50 |
|
GnarlyCharlie4u posted:I'm being forced headlong into this thread as my company has finally begun to implement "The Cloud" lol. Amazon has instances randomly die from time to time, and they're freaking Amazon. Glad to see you recognize this is the case Have you looked at oVirt (or RHEV, Red Hat's paid and supported product built on it)? That might be a better fit depending on what you need. My company runs our own OpenStack private cloud in production, with about 270 VM's currently (and more on the way). Happy to answer questions. The main thing I'll warn you about is that it is a LOOOOOOOT of work to set up, tune and maintain. We have one engineer who spends about 90% of his time doing nothing but babysit OpenStack, with the rest of us backing him up. When it's humming along, it is pretty dang awesome. When it fails, prepare to break out the hard liquor because the traditional "ask google" option often does not exist. You may be the only person to ever have the problem you're seeing, or at least talk about it publicly. Often the only relevant search result is a link to the source code. OpenStack is absolutely not as turnkey as VMware. What you "save" on licensing you will end up spending in man hours. Regarding your specific question, OpenStack does not have HA features like VMware out of the box. Read up on the "cattle vs pets" metaphor. If a VM dies, or a whole compute node, it will not automatically reboot the affected guests on another machine. OpenStack is designed to be a cloud computing platform, and in the cloud, failure is meant to be expected. Your application should be architected such that it doesn't care that webserver1234 randomly vanished. Servers which Absolutely Cannot Go Down are not good candidates for OpenStack. You can run them there, but be prepared for sadness. I don't mean to make this sound like all doom and gloom. OpenStack, especially beginning with the Icehouse release, is really impressive. I love running it. 
But don't kid yourself that it is going to be easy to set up or maintain. It's a complex, always-evolving beast. minato posted:I feel that a generous description of OpenStack is that it's a collection of software for launching and managing kvm / qemu instances, and the hairy networking between them. It doesn't provide the magical ability to keep VMs up without downtime, it doesn't automatically provision new VMs for you when you need more compute resources, and it can't (AFAIK) migrate instances between compute nodes like VMWare's vMotion does. OpenStack actually does support Live Migration (what VMware calls vMotion). It works great. I'm not sure if there's an analog to Storage vMotion or not, or if live migration works with local storage. I'd guess not. We run all of our VM's on shared storage so I've never looked into it. With Heat + Ceilometer you can set up autoscaling based on various metrics (all VM's in a pool have been at < 10% idle CPU for 10 minutes: boot another), although we are not doing this in production yet. On my 2015 wishlist. Dr. Arbitrary posted:I'd run with it. This. There aren't a ton of OpenStack jobs out there, but there are even fewer candidates who are actually proficient with it. Get that on your resume/LinkedIn and watch recruiters start blowing you up. Docjowles fucked around with this message at 23:56 on Dec 26, 2014 |
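The Heat + Ceilometer rule quoted above (every VM in the pool under 10% idle CPU for the whole window: boot another) boils down to a decision function like this toy version. In a real deployment the samples come from Ceilometer metering and the scale-up action fires through a Heat alarm; neither is modeled here, and the numbers are just the ones from the post.

```python
def scale_decision(cpu_idle_samples, threshold=10.0):
    """Toy autoscaling rule: if every sample in the window shows the
    pool below `threshold`% idle CPU, ask for one more instance;
    otherwise hold steady. An empty window means no evidence, so hold."""
    if cpu_idle_samples and all(s < threshold for s in cpu_idle_samples):
        return "scale_up"
    return "hold"

# ten one-minute samples of pool-wide idle CPU %
print(scale_decision([4.2, 3.1, 8.9, 2.0, 5.5, 7.3, 1.1, 6.6, 9.9, 4.4]))  # scale_up
print(scale_decision([4.2, 55.0, 8.9]))                                    # hold
```

Requiring *every* sample to breach the threshold (rather than the average) is what keeps a single busy minute from spinning up instances you don't need.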
# ? Dec 26, 2014 23:54 |
|
evol262's post about OpenStack is a very good one, and I'd suggest you get your CIO to read it carefully, with giant <blink> tags around the Neutron networking section because getting networking right is very difficult. The engineers who manage my company's on-premise OS clusters have to be very experienced and competent, and your company will similarly have to invest a lot of time/money in managing a local OS installation. It was certainly beyond me; as a DevOps person who just wanted to play around with OS before I got tenant access to our clusters, I got a sandbox cluster installed on 1 machine via packstack fairly painlessly but installing a multi-node sandbox was an exercise in frustration that never fully worked properly. If it doesn't have to be on-premise, then let Rackspace or some other OpenStack provider deal with the pain. Also, be prepared for OpenStack version migrations (e.g. Havana to Icehouse) to be measured in quarters rather than days. If you're going to play around with OS to get a feel for it, then definitely use packstack to ease some of that pain. And even better, there are Docker images of various OpenStack components you can find at index.docker.io which will avoid the very lengthy install times. I concur with evol262 that you should initially select GRE or VLANs to let the Compute nodes communicate; they're easier than the alternatives.
|
# ? Dec 27, 2014 00:02 |
|
Thanks Ants posted:It sounds like your CIO thinks OpenStack is just like vSphere but without the licensing costs. He wants all the things. For free. Always. Free things are far superior. Docjowles posted:lol. Amazon has instances randomly die from time to time, and they're freaking Amazon. Glad to see you recognize this is the case So unless it's coming out of my pocket, it's not happening. I totally understand the pets vs cattle reference, which is why I started all this posting in the first place. I don't want to waste time herding cats. Docjowles posted:I don't mean to make this sound like all doom and gloom. OpenStack, especially beginning with the Icehouse release, is really impressive. I love running it. But don't kid yourself that it is going to be easy to set up or maintain. It's a complex, always-evolving beast. minato posted:evol262's post about OpenStack is a very good one, and I'd suggest you get your CIO to read it carefully, with giant <blink> tags around the Neutron networking section because getting networking right is very difficult. The engineers who manage my company's on-premise OS clusters have to be very experienced and competent, and your company will similarly have to invest a lot of time/money in managing a local OS installation. It was certainly beyond me; as a DevOps person who just wanted to play around with OS before I got tenant access to our clusters, I got a sandbox cluster installed on 1 machine via packstack fairly painlessly but installing a multi-node sandbox was an exercise in frustration that never fully worked properly. Again thank you all so much for your help, and for confirming my worst fears. Time to break out the hard liquor, I guess. I'll be back with more stupid questions.
|
# ? Dec 27, 2014 01:17 |
|
GnarlyCharlie4u posted:No. I'll check out oVirt tonight. RHEV is out of the question though. Every time I show him something that costs any amount of money, he wants all of those features, but for free.
|
# ? Dec 27, 2014 05:35 |
|
adorai posted:oVirt is pretty nice as a vSphere replacement, but you need to keep in mind it does require Linux knowledge to implement and support. Most corporate environments, unless they are already deep in Linux knowledge, will benefit more from vSphere than from oVirt, for the simple fact that you can hire a vSphere schmuck off the street far easier than you can find someone who can support oVirt. My boss has a Linux superiority complex. (See above post about CentOS.) So that's probably a good thing. My experience with Linux has been limited, but I'm not afraid to learn the hard way.
|
# ? Dec 27, 2014 07:33 |
|
GnarlyCharlie4u posted:My boss has a Linux superiority complex. (See above post about CentOS.) So that's probably a good thing. Seriously, get your boss to reconsider going from zero to OpenStack. It's like giving a 16-year-old the keys to your helicopter instead of the Honda Civic.
|
# ? Dec 27, 2014 07:49 |
|
Not sure if this is the place, but how do you end up with cattle as opposed to pets? How do you create an application where it doesn't matter if it crashes and you just re-create it?
|
# ? Dec 27, 2014 08:59 |
|
For designing apps as cattle, look at the 12 Factor principles for a guide. That was based on how Heroku advised its users to design their apps. Most of the time it boils down to making your app stateless, and to have its config injected easily. This is quite straightforward for (say) a web service. Not everything can be 12-factor. Like, your database server can't easily treat its filesystem as an external service, but it can come pretty close if it's backed by a grunty NetApp filer. For hosts, you just need to start ripping out all the stuff that makes a specific host a special snowflake. No meticulously provisioned hostnames/IPs, no hand-installed services. You can expect to have some long-lived hosts (e.g. a monitoring server or a load balancer) but they should be easily and quickly replaceable if they fail. You can get the high-availability with something like keepalived, and spinning up a new one with the right software installed should be a fast automated process anyway (e.g. use Puppet or Docker). minato fucked around with this message at 09:38 on Dec 27, 2014 |
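As a minimal sketch of the "have its config injected easily" point: every setting that varies between deploys comes in through the environment, so the same artifact runs anywhere. The variable names here are invented for illustration, not any particular app's schema.

```python
import os

def load_config(environ=os.environ):
    """12-factor style config: read everything deploy-specific from the
    environment, with sane development defaults. Passing the mapping in
    (instead of touching os.environ directly) keeps this testable."""
    return {
        "db_url": environ.get("DATABASE_URL", "postgres://localhost/dev"),
        "log_host": environ.get("LOG_HOST", "localhost"),
        "log_port": int(environ.get("LOG_PORT", "514")),
    }

# simulate a production environment without touching the real one
cfg = load_config({"DATABASE_URL": "postgres://db.prod/app", "LOG_PORT": "1514"})
print(cfg["db_url"], cfg["log_port"])  # postgres://db.prod/app 1514
```

Because nothing is baked into the code or the image, the same process can be launched as dev, staging, or prod just by changing what the orchestrator exports.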
# ? Dec 27, 2014 09:27 |
|
Will this eventually become standard in application development? It seems like a great concept.
|
# ? Dec 27, 2014 09:42 |
|
Tab8715 posted:Will this eventually become standard in application development? It seems like a great concept. No, not everything can be shoehorned into this model. Anything that relies heavily on transactions or state, for instance. It's difficult to make non-trivial apps that can just scale out, and most things don't need that level of reliability.
|
# ? Dec 27, 2014 12:50 |
|
Distributed systems are an order of magnitude harder to maintain, operate, and debug than non-distributed systems. You shouldn't use them unless you actually need the scale they permit. If you're using Cassandra as the backend for your office lunch-ordering application, you've probably hosed up.
|
# ? Dec 27, 2014 19:24 |
|
minato posted:For designing apps as cattle, look at the 12 Factor principles for a guide. That was based on how Heroku advised its users to design their apps. I like this except for the env variables as config, since it's hard to version that. I wish there was a better way. Tab8715 posted:Not sure if this is the place, but how do you end up with cattle as opposed to pets? How do you create an application where it doesn't matter if it crashes and you just re-create it? The state that can be lost is not the eternal state.
|
# ? Dec 27, 2014 19:47 |
|
Malcolm XML posted:I like this except for the env variables as config, since it's hard to version that. I wish there was a better way. I think that's not a hard and fast rule; it's more about being easy to inject config at runtime. Env vars are just one way of doing that. We're using a combination of files and environment vars to configure containers at spinup time. Files are for static configuration like tuning parameters and SSL certs. Those files are handled by the config management system (Puppet / Chef / etc) so it's versioned, and we can have dev/staging/prod/qa variations. The env vars are used for configuration determined at runtime, e.g. the IP/port of a log server. 12-factor is definitely useful when your application outgrows a single server and you need to make it distributed. I see a growing trend towards a Data-Center Operating System (DCOS). Apache Mesos + Marathon (which drive Twitter and AirBnB) is the most mature example of this. A DCOS is analogous to a kernel's scheduler but spread across many hosts. With the kernel, you submit a process to be run and it decides when and where to run it; what core will do the work, what memory area to use, etc. As the job submitter, you don't know or care about any of that, it's all abstracted away from you. With a DCOS it's the same thing - you submit a job to the DCOS and the DCOS decides which of its array of hosts will get the job. (This sounds like a PaaS, but to my mind it's spiritually closer to an OS's scheduler.) This works well when the DCOS can quickly send a job to a specific host. It's one reason why containerization technologies like Docker are so big right now, because the app container is so lightweight (relative to a VM image) that it can be deployed on any host in the cluster very quickly. I can send a command to a virgin host that will download my 5MB app container and have it running within a few seconds. With the old way, we'd have to spin up a new VM that contained the app, or maintain a host with Puppet where we'd install/configure the app before running it.
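The file-plus-env layering described here can be sketched like so. The key names and the `LOG_SERVER` variable are invented for the example, and the throwaway JSON file stands in for the Puppet/Chef-managed static layer.

```python
import json
import os
import tempfile

def load_layered_config(path, environ=os.environ):
    """Static layer from a (config-management-controlled) file, runtime
    layer overlaid from environment variables. The runtime value wins
    because it's the one decided at container spinup time."""
    with open(path) as f:
        cfg = json.load(f)              # versioned, per-environment file
    if "LOG_SERVER" in environ:         # runtime override wins
        cfg["log_server"] = environ["LOG_SERVER"]
    return cfg

# demo: a throwaway file standing in for the config-managed one
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"ssl_cert": "/etc/pki/app.pem", "log_server": "localhost:514"}, f)
cfg = load_layered_config(f.name, {"LOG_SERVER": "10.0.0.5:514"})
print(cfg["log_server"])  # 10.0.0.5:514
os.unlink(f.name)
```

The split keeps slow-changing, versionable settings (certs, tuning) in the repo while the per-instance bits stay injectable, which is what makes the containers cattle rather than pets.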
|
# ? Dec 27, 2014 20:30 |
|
I don't exactly know where to put this, but here goes: Does anyone know if Linux bridging works under Hyper-V? I've been trying to set up bridging for LXC and I am having a hell of a time getting the containers to work. I've tried the following Linux sysctls to disable any filtering:

net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0

But I still can't get DHCP to work or ping another machine from a container (pinging the Linux host itself works). I've created a br0 with brctl and everything looks right, but it does not work. This is making me doubt the networking part of Hyper-V. Any clues?
|
# ? Dec 28, 2014 18:19 |
|
Is this any good? http://www.cloudbase.it/hyper-v-promiscuous-mode/ I had to do something similar in vSphere to get a VPN concentrator to work properly.
|
# ? Dec 28, 2014 18:22 |
|
Thanks Ants posted:Is this any good? No dice, I hoped that would be it but it does not seem to work. It's also pretty horrible to pinpoint where it goes bad. If anyone else has an idea, please let me know.
|
# ? Dec 28, 2014 21:36 |
|
Mr Shiny Pants posted:Does anyone know if Linux bridging works under Hyper-V? I've been trying to setup bridging for LXC and I am having a hell of a time getting the containers to work.
|
# ? Dec 28, 2014 22:04 |
|
|
minato posted:I think that's not a hard and fast rule, it's more about being easy to inject config at runtime. Env vars are just one way of doing that. Yeah, I mean, when I deploy a service it just gets distributed across a number of instances and scales up and down as needed, and it's only the one service that we hosed up architecting that's causing us pain. Re: "DCOS", Azure has been doing this for years, at least internally; god knows if/when they will expose it. Basically the way to make your service scale is to not do anything that would cause it to not scale.
|
# ? Dec 28, 2014 22:53 |