|
Tab8715 posted:Not sure if this is the place, but how do you end up with cattle as opposed to pets? How do you create an application where it doesn't matter if it crashes and you just re-create it?

Tab8715 posted:Will this eventually become standard in application development? It seems like a great concept.

A few things. As usual, I agree with Misogynist. Not ALL software should work this way. The method shines when you're talking about "web scale" stuff that has to handle millions of requests per day. It can be beneficial in other situations, but for super-high-scale apps, distributed systems are often the only viable design.

As to whether it will become the standard: if you are working at web scale, it already is. Companies that aren't doing these things are either failing, or succeeding in spite of themselves--and will eat poo poo when a faster, leaner competitor enters the market.

Funny enough, there was a timely article on this in the DevOps Weekly newsletter today: How to start learning high scalability. The author is Brazilian so it's a little hard to parse, but he provides a lot of good links for further reading. DevOps Weekly is a good newsletter and I recommend anyone interested in these topics subscribe. There's zero spam involved. Another site, High Scalability, is a great blog that does case studies on how many incredibly popular websites and mobile apps work behind the scenes.

The 12 Factor App link minato posted is outstanding, and even if you disagree with individual points (like using env vars for all config), on the whole it's a good quick summary of what we're talking about.

Finally, ask your boss to buy you a copy of The Practice of Cloud System Administration (not a referral link). It just came out this year and I'm working my way through it now. It's an outstanding high-level guide to concepts like "cattle vs pets", distributed systems, cloud computing, DevOps and so on. It's a very wide and moderately deep book with jumping-off points for anything you really want to dig into further.

Docjowles fucked around with this message at 07:13 on Dec 29, 2014 |
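To make the 12 Factor "config from env vars" point concrete, here's a minimal sketch. The variable names (APP_DB_URL and friends) are made up for illustration, not from any real app:

```python
import os

def load_config(env=os.environ):
    """12-factor style: all config comes from the environment, with sane
    defaults for local development. Names below are hypothetical."""
    return {
        "db_url": env.get("APP_DB_URL", "postgres://localhost/dev"),
        "workers": int(env.get("APP_WORKERS", "4")),
        "debug": env.get("APP_DEBUG", "0") == "1",
    }
```

The payoff is that the exact same artifact runs unchanged in dev, staging, and prod; only the environment differs.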
# ? Dec 29, 2014 07:10 |
|
Well, my problem of Linux bridging not working in Hyper-V turned out to be a simple checkbox: turn on "Enable MAC address spoofing" under the advanced features of the VM's NIC and BAM! it works. Now to get back the day I spent troubleshooting this...
|
# ? Dec 29, 2014 21:37 |
|
It's a miracle. I somehow managed to talk sense into my CIO and got him to take a look at oVirt. Thank you all so much for your help and links. I think he finally understands the difference between pets and cattle, and the fact that we pretty much only have pets. He's still adamant about implementing Openstack, just not for everything (or at this point anything). It's oVirt for pets, OS for cattle. I told him that's fine because the two can play together nicely. Now he wants a plan for making it happen. So I have about a week or so to learn both oVirt and Openstack and get them to share resources. It's not going to be pretty, but at least I don't have to worry about dealing with a massacre of pets in Openstack. That being said, do you know of any good articles on oVirt and Openstack convergence? So far I've only looked at http://www.ovirt.org/Features/OSN_Integration (updated today)
|
# ? Dec 29, 2014 23:49 |
|
GnarlyCharlie4u posted:So I have about a week or so to learn both oVirt and Openstack, and get them to share resources. This will end well
|
# ? Dec 30, 2014 01:09 |
|
GnarlyCharlie4u posted:a massacre of pets This is an excellent turn of phrase to describe what is about to happen to you.
|
# ? Dec 30, 2014 03:33 |
|
GnarlyCharlie4u posted:So I have about a week or so to learn both oVirt and Openstack, and get them to share resources.
|
# ? Dec 30, 2014 03:50 |
|
Why aren't you just using VMware again? Or hell, Hyper-V or XenServer if the situation is really that dire. Just send him some fluff whitepapers or whatever the gently caress he read to make him think oVirt and Openstack were a good fit for your organization. Or better yet, find a new job. Jesus Christ.
|
# ? Dec 30, 2014 05:57 |
|
GnarlyCharlie4u posted:It's a miracle. I somehow managed to talk sense into my CIO, and got him to take a look at oVirt. Thank you all so much for your help and links. What size are you?
|
# ? Dec 30, 2014 13:02 |
|
Bitch Stewie posted:What size are you? 'Bout a sixteen, sixteen and a half?
|
# ? Dec 30, 2014 14:00 |
|
I've got some physical servers that I'm having a hard time virtualizing because they use stuff like parallel ports or DVI video outputs. How difficult is this kind of thing?
|
# ? Dec 30, 2014 17:20 |
|
What do these servers do? Not everything is a candidate for virtualisation.
|
# ? Dec 30, 2014 17:37 |
|
Dr. Arbitrary posted:I've got some physical servers that I'm having a hard time virtualizing because they use stuff like parallel ports or DVI video outputs. How difficult is this kind of thing? Thanks Ants posted:What do these servers do? Not everything is a candidate for virtualisation. I'm guessing something with a parallel port dongle is a very legacy application, though, not something especially resource-intensive. Vulture Culture fucked around with this message at 17:45 on Dec 30, 2014 |
# ? Dec 30, 2014 17:40 |
|
I did not know about USB devices being shared around via IP, that's very handy.
|
# ? Dec 30, 2014 17:46 |
|
Thanks Ants posted:I did not know about USB devices being shared around via IP, that's very handy.
|
# ? Dec 30, 2014 19:52 |
|
Misogynist posted:A lot more now than they used to be, I think. Even very CPU-intensive systems are often going to stop short of taking up the 20+ cores you see on modern 1U/2U servers, though unless you're getting a high enough density, multiple physical servers might still be cheaper than the vSphere licensing.
|
# ? Dec 31, 2014 00:48 |
|
Thanks Ants posted:What do these servers do? Not everything is a candidate for virtualisation. I've got a variety of things; some of it is old loving legacy software that wants to talk over RS-232, or computers that only exist to play a 20-second video on loop forever and output it over a DVI cable. I'm thinking the best option will be to replace these 10-year-old, full-tower PCs with small form factor PCs that have exactly the outputs I need.
|
# ? Dec 31, 2014 04:43 |
|
The RS-232 stuff might be replaceable with something like http://www.brainboxes.com/ethernet-to-serial. As far as the computer is concerned, once the drivers are installed it's a hardware serial port. One exception, I think, is anything that dumps voltage across the serial pins, so check first. Stuff that loops video I'd probably look to replace with a digital signage player.
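Many Ethernet-to-serial bridges can also be driven as a plain TCP socket ("raw server" mode) instead of installing the virtual COM port driver. Whether the specific Brainboxes model supports that is an assumption, so check the datasheet. A minimal sketch of talking to a legacy RS-232 device that way:

```python
import socket

def send_serial_command(host, port, command, timeout=5.0):
    """Send one command to a serial device behind a TCP-to-RS232 bridge
    and return the raw reply. Assumes the bridge is configured in raw
    TCP server mode; host/port are whatever the bridge is set to."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        # Most legacy serial protocols are line-oriented ASCII, so append
        # CR/LF; adjust framing to whatever the device actually expects.
        sock.sendall(command.encode("ascii") + b"\r\n")
        return sock.recv(1024)
```

The upside of this approach over the driver is that the VM needs nothing installed at all; the downside is you have to handle framing and timeouts yourself.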
|
# ? Dec 31, 2014 13:10 |
|
teamdest posted:This is an excellent turn of phrase to describe what is about to happen to you.

Internet Explorer posted:Why aren't you just using VMware again? Or hell, Hyper-V or XenServer if the situation is really that dire. Just send him some fluff whitepapers or whatever the gently caress he read to make him think oVirt and Openstack was a good fit for your organization. Or better yet, find a new job. Jesus Christ.

He will absolutely not use any of those 3, and he refuses to give any argument as to why.

Bitch Stewie posted:What size are you?

I have another question for you virtualization gurus. Would it be better to virtualize one Windows Domain Controller, taking advantage of oVirt's high availability, or stick with Windows Server Failover Clustering between 2 separate domain controllers on identical hardware? I obviously don't have time to test one versus the other and I'm just fishing around for use cases, or at least a more experienced opinion.

Edit: I forgot to mention I'll be using Server 2012

GnarlyCharlie4u fucked around with this message at 18:42 on Dec 31, 2014 |
# ? Dec 31, 2014 17:58 |
|
GnarlyCharlie4u posted:I have another question for you virtualization gurus.
|
# ? Dec 31, 2014 18:14 |
|
GnarlyCharlie4u posted:I can sum all that up in one word: Politics. Show him articles about how AWS runs on XenServer (close enough). Or just find a new job, like you mentioned. I can't imagine working for someone that dumb.
|
# ? Dec 31, 2014 19:45 |
|
My old boss refused to let me buy VMware licenses for our in-house dev/qa environment, even Essentials Plus (which would have been sufficient; it was a tiny shop). Instead we ran Google Ganeti with a bunch of mirrored DRBD volumes acting as "shared storage". If you're asking "wtf is Ganeti?"... exactly. It was a total bitch to use, lacked basic management features, often lost data or got out of sync, etc. I'm sure it runs great in-house at Google, where they have the devs who wrote it on staff and the resources to keep it up, but I was just one dude.

I don't even remember what his beef was with VMware. Some bug in the ESX 3.5 era caused the site to go down, so clearly it was untrustworthy poo poo and never again would he give them money. He had a bunch of hangups like that. "Oh, one time the SAN firmware upgrade didn't go well, so we will never upgrade it again ever. Even if the vendor repeatedly warns us that there's a critical bug that is known to cause data loss." And so on.

The kicker was that production DID run VMware (although it was 3 hosts on Essentials Plus and one standalone, because they wouldn't spring for Standard licenses). So we had two totally separate platforms to write tooling for and maintain. I am 100% certain we spent way more of my salaried time propping up the lovely Ganeti cluster than a pack of Essentials Plus licenses would have cost. And when there were problems, it's not like I could really ask anyone for help, because there are maybe 5 other people on earth outside of Google who run Ganeti.

We also ran one app at AWS that he insisted needed to maintain 100% uptime and was super mission critical. Yet he would only allow me to boot one instance at a time, in one availability zone. Then when Amazon had a minor outage and the site was down for 20 minutes, it was somehow my fault.

Some people are just irrational and have not kept up with modern trends and best practices. Or they go too far in the other direction, and see Google/Facebook/Amazon doing something, so clearly that's the best practice for a 10-employee company, too. They are not fun to work for (so I don't anymore).

Docjowles fucked around with this message at 20:49 on Dec 31, 2014 |
# ? Dec 31, 2014 20:46 |
|
I am so glad I have a boss who trusts my judgement. Not to say I always get it right, but some of the stuff in this thread, hell, just on this one page, is so hosed up that words fail me. The bit I'm curious about with all these hare-brained antics: does management (by which I mean business management) actually know what these idiots are doing, and that they're putting the business at (varying levels of) risk with some of this insanity, over either weird irrational prejudices or to save a few $$$?
|
# ? Dec 31, 2014 21:00 |
|
Bitch Stewie posted:I am so glad I have a boss who trusts my judgement.

No. In my experience, all they hear is... Also, in the long run, it always costs more money to deal with a hosed up, Frankensteined amalgamation of free beta and RC solutions than it does to implement a tried-and-true stable platform that costs a few bucks. Politics in technology is one thing I will never understand. Denying one version/fork of anything because you have some irrational grudge against devs from 20 years ago, or because it is not the 'one true linux', is stupid and dangerous.

On-topic: I have no idea what I'm doing. Tonight I'm going to celebrate the new year. Tomorrow I'll be tackling the virtualization of Windows. I'm shooting for a clustered Domain Controller virtualized on oVirt. It is my understanding that in a virtualized Windows environment, AD is the weak point. So I need to read up on how to deploy and manage an AD-detached cluster in a VM. Then I get to figure out how to make the AD VM as robust as possible. This slideshow seems like a good place to start.
|
# ? Dec 31, 2014 21:18 |
|
AD in a virtual environment is simple so long as you don't do anything dumb. Run your DCs on different hosts and make sure your hosts and your PDC Emulator are pulling time from an external NTP source (unless you're one of those shops that has an internal switch do it or something). Time is the thing that seems to (still) catch people out with virtualization and AD. Get the time right and the rest looks after itself.
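For the curious, the wire format those NTP sources speak is tiny: a 48-byte UDP packet on port 123. A sketch of just the packet handling a minimal SNTP client would do (in practice, let w32tm or vmtools handle this):

```python
import struct

# Offset between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_DELTA = 2208988800

def build_sntp_request():
    # First byte 0x1B = leap indicator 0, version 3, mode 3 (client);
    # the remaining 47 bytes are zero. Send this over UDP port 123.
    return struct.pack("!B47x", 0x1B)

def parse_transmit_time(packet):
    # The server's transmit timestamp (seconds part) sits at byte
    # offset 40 of the 48-byte reply; convert NTP time to Unix time.
    ntp_seconds = struct.unpack("!I", packet[40:44])[0]
    return ntp_seconds - NTP_DELTA
```

The epoch offset is exactly the kind of detail that bites people when clocks look "35 minutes off plus 70 years" in logs.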
|
# ? Dec 31, 2014 21:23 |
|
I'll just nth that I have no idea what you're talking about when you say "clustered DC". The basic design of Active Directory is redundant; you just stand up as many domain controllers as you feel you need. Two is the minimum, but depending on things you could need more. AD doesn't depend on any single domain controller being up (assuming you don't hardcode some third-party product to point directly at a specific DC); it just depends on A domain controller being up.
|
# ? Dec 31, 2014 21:40 |
|
FISHMANPET posted:I'll just nth that I have no idea what you're talking about when you say clustered DC is. The basic design of Active Directory is redundant, you just stand up as many Domain controllers as you feel you need. Two is a minimum, but depending on things you could need more. FISHMANPET posted:AD doesn't depend on any single domain controller being up (assuming you don't hardcode some third party product to point directly at a specific DC) it just depends on A domain controller to be up.
|
# ? Dec 31, 2014 22:09 |
|
I would just advise to walk before you run. Failover clustering isn't that hard, at least not on paper, but I wouldn't say it's step 1. First make sure your existing stuff isn't all hosed and that you have the skill to unfuck it, and then move on to clustering.
|
# ? Dec 31, 2014 22:12 |
|
GnarlyCharlie4u posted:I mean setting up a failover cluster for some of our services like SQL, DNS, and the print server. Yes it's that important and holy poo poo I want to set fire to every printer in the building. Not sure why you would cluster DNS; it's pretty much designed from the ground up to be redundant. On the print server, unless you really need the uptime, HA through your hypervisor should be enough. SQL AlwaysOn failover clustering is included with a Standard license (active/passive; you also need a witness server for automated failover), but if you want to use the newer AlwaysOn Availability Groups you will need an Enterprise license. Not trying to be a dick or anything, but you sound rather in over your head.
|
# ? Dec 31, 2014 22:18 |
|
GnarlyCharlie4u posted:I have another question for you virtualization gurus.

This isn't even a virtualization question, more of a MS best practices question, and both responses you gave as possible answers are wrong. The correct answer is two completely separate domain controllers (no Microsoft cluster bullshit), hardware be damned. Expecting to leverage some VM HA/Windows clustering HA is just borrowing trouble.

GnarlyCharlie4u posted:On-topic: I have no idea what I'm doing. Tonight I'm going to celebrate the new year. Tomorrow I'll be tackling the virtualization of Windows.

loving hell, don't shotgun this. Do you have a domain controller already? If yes:

1. Build a new server, join the domain.
2. Sort the time options. Google is your friend. Last I read, the current practice is: install vmtools (or whatever the gently caress off-brand virtualization agent you are using), have the guest pull from the ESX server, and have ESX pull from an external NTP source. I use the US Navy's. Microsoft's is good too.
3. Install Active Directory/DNS/DHCP/whatever.
4. Seize dem roles.
5. Build another new server, steps 1-3 again.
6. Decommission the old server.

GnarlyCharlie4u posted:I mean setting up a failover cluster for some of our services like SQL, DNS, and the print server. Yes it's that important and holy poo poo I want to set fire to every printer in the building.

Microsoft Failover Clustering is a big fat loving waste of time and money 95% of the time. DNS will take care of itself (it's designed to if you install it on multiple servers). A print server failover cluster is stupid and dumb. And I admittedly couldn't be arsed to do it for SQL in a virtual environment either, considering my stopped/not-running state to fully online and serving data is in the sub-minute range when virtualized, and how daffy Microsoft failover clustering can be.

EDIT: Seconding the "in way over your head" thing. And the fact that you aren't even going to do it all with well documented (read: licensed) toolsets is just barfy as gently caress. Excuse me while I vmotion some poo poo around just because I can. *right clicks*

Rhymenoserous fucked around with this message at 22:39 on Dec 31, 2014 |
# ? Dec 31, 2014 22:25 |
|
Rhymenoserous posted:EDIT: Seconding the "In way over your head" thing. Thirding this. See last page: "I'm just a lowly helpdesk monkey." I'm basically being given a list of wants by my CIO and being told to make it happen. Thanks for the basic sysadmin advice, all. I'm going to start with one and go from there.
|
# ? Dec 31, 2014 23:29 |
|
Rhymenoserous posted:And I admittedly couldn't be arsed to do it for SQL in a virtual environment either considering my Stopped not running state to fully online and serving data is in the sub minute range when virtualized and how daffy Microsoft failover clustering can be. It significantly increases your maintenance windows for patching, though. SQL should be down while you are running the patches, for safety, and the subsequent reboots will be longer. With a cluster, it's simply: patch the passive node, fail over to it, and fail back if there are issues. That should take less than 30 seconds. If the application layer is robust enough, end users may not even notice. As always, it depends on the use case. Our SQL servers aren't really good candidates for virtualization anyway, so clustering is a must. One day I would love to have two clusters in two data centers with an availability group between them for more redundancy, but that environment needs a bit more building.
|
# ? Jan 1, 2015 01:11 |
|
GnarlyCharlie4u posted:I mean setting up a failover cluster for some of our services like SQL, DNS, and the print server. Yes it's that important and holy poo poo I want to set fire to every printer in the building.
|
# ? Jan 1, 2015 16:53 |
|
Rhymenoserous posted:2. Sort the time options: Google is your friend. Last I read the current practice is install vmtools or what the gently caress ever retarded off brand virtualization you are using, have it pull from the ESX server and have ESX pull from an external NTP source. I use the US Navies . Microsofts is good too. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318
|
# ? Jan 1, 2015 17:55 |
|
adorai posted:Based on what I think I know about your organization, you should probably just setup two virtual DCs and let everything else get HA from the virtualization layer. That's what I'll be doing (and in a roundabout way what I was asking in the first place) provided that virtualization goes well. Best part is I could probably just tell my boss that I set up his precious failover cluster and he'd never know. I won't, but I could.
|
# ? Jan 2, 2015 15:27 |
|
Hi, I'm too lazy to read most of the backlog of posts that I missed over the last two weeks of vacation, but I work on both Openstack and oVirt/RHEV, so I'll make a few comments...

GnarlyCharlie4u posted:The decision has come down that we will be using OpenStack managed by Foreman.

Do you have someone who knows Puppet? If not, don't use Foreman. Use ManageIQ.

GnarlyCharlie4u posted:No. I'll check out oVirt tonight. RHEV is out of the question though. Every time I show him something that costs any amount of money, he wants all of those features, but for free.

GnarlyCharlie4u posted:I'm not. We're talking about a man who migrates mission critical things to the latest Release Candidate of Centos, every time one drops.

GnarlyCharlie4u posted:That being said, do you know of any good articles on oVirt and Openstack convergence?

Glance integration is also a thing. There aren't any really good articles on convergence (because we're not really converging, mostly because oVirt/RHEV use a management framework called vdsm which Openstack has no interest in picking up and using), though there are some parts that are getting pulled out. If it were me, I'd probably use ManageIQ and Foreman combined. Then:

1. Install CentOS on two (or however many) nodes.
2. Install oVirt on them.
3. Configure and install the oVirt hosted engine (which runs on the nodes, like vSphere).
4. Install the Neutron virtual appliance.
5. Configure Neutron as a network provider for oVirt.
6. Either add Keystone to the hosted engine or to the Neutron appliance. At this point you can split the remaining services up.
7. Use 2+ nodes for Nova.
8. If you're using a Linux box for storage for oVirt (iSCSI, NFS, Gluster, whatever), use that for Swift, Cinder, and Glance, and add it to oVirt as a Glance provider. If you're not, run these services on the Nova nodes.
9. Build and import (or download and import) images into Glance for Openstack to use. You can import these into oVirt as templates if you want, or treat oVirt as traditional, non-templated virt.

Pretty much it. Unless you're really planning on scaling out into a "cloud" shop (which, as noted, requires a ton of prep), though, I'd probably just use oVirt with Foreman (which also has integration) and use that to create VMs on the fly (as much as you may need them on the fly), maybe with some templates and trivial Python scripts. You can create VMs with the REST API, too, but I haven't actually used it, and it looks pretty ugly for now. But if you use Linux all over the place, you've got Python. Just use that. Here are some examples.
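To give a flavor of the REST route: creating a VM is a small XML POST to /api/vms. This matches the oVirt 3.x-era REST API as I recall it, so verify the endpoint and element names against your version's docs before trusting it. A sketch of building the request body:

```python
from xml.sax.saxutils import escape

def build_vm_request(name, cluster, template="Blank"):
    """Build the XML body for POST /api/vms on the oVirt REST API.
    Element names follow the oVirt 3.x API as remembered; check
    your version's documentation before relying on this."""
    return (
        "<vm>"
        f"<name>{escape(name)}</name>"
        f"<cluster><name>{escape(cluster)}</name></cluster>"
        f"<template><name>{escape(template)}</name></template>"
        "</vm>"
    )

# To actually send it, POST this body with HTTP basic auth and
# Content-Type: application/xml to https://<engine>/api/vms,
# e.g. via urllib.request.
```

Wrapping this in a ten-line script is what "create VMs on the fly with trivial Python scripts" mostly amounts to.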
|
# ? Jan 2, 2015 19:00 |
|
I'm using Veeam in an environment and I'm fairly new to the application. There is one proxy with 3 ESXi hosts, running Veeam version 7 with about 30 VMs. I didn't set this up, so I need to do some ripping and replacing and upgrade to v8. In the meantime, I'm trying to understand what is causing VMware to take "forever" removing snapshots after a Veeam job "finishes", and what I can focus on to speed up the process. After the Veeam job finishes, we find that we still have to consolidate the disks afterwards. Running ESXi 5.5U1 if that matters.
|
# ? Jan 4, 2015 02:46 |
|
ghostinmyshell posted:I'm using Veeam in an environment and fairly new to the application. There is one proxy with 3 ESXI hosts running version 7 with about 30 VMs. I didn't set this up so I need to do some rip and replacing and upgrade to v8. Vulture Culture fucked around with this message at 19:13 on Jan 4, 2015 |
# ? Jan 4, 2015 19:08 |
|
So I moved a VM from one host to another using vCenter, updated my scheduled Veeam backup, and everything worked great... except now I have two 'branches' of backups for one VM, which caused me to run out of disk space on my NAS.

Backup Task 1
--- Backup at Old Host: 7 retention points
--- Backup at New Host: 6 retention points

When I try to remove the old retention points from disk, it fails with a "Failed to flush file buffers" error message, then tells me to dig through the log file. Can I just go into my NAS and nuke the .vbk and .vib files that I don't care about? Or will that corrupt the Veeam database?
|
# ? Jan 5, 2015 14:31 |
|
That should be fine as long as you rescan the backup repository, but you really should open a support case with Veeam so that A) they can make sure it's done the right way and B) they are made aware of their software doing dumb things. Veeam has improved a lot in the past couple of years, but they can't keep improving unless they know what is wrong with their product.
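If you do end up cleaning a repository by hand, something like this is safer than ad-hoc deleting in the NAS UI: build the list first, review it, delete, then rescan the repository in Veeam. The .vbk/.vib extensions are Veeam's full/incremental backup files; the age-based selection here is just an illustration, not Veeam's own retention logic:

```python
import os
import time

def find_stale_backups(repo_dir, max_age_days, exts=(".vbk", ".vib")):
    """Return backup files under repo_dir older than max_age_days,
    oldest first. Review the list before deleting anything, then
    rescan the repository in Veeam so its database matches disk."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for root, _dirs, files in os.walk(repo_dir):
        for fname in files:
            path = os.path.join(root, fname)
            if fname.lower().endswith(exts) and os.path.getmtime(path) < cutoff:
                stale.append(path)
    return sorted(stale, key=os.path.getmtime)
```

Printing the list and piping it past your own eyes before any `os.remove()` is the whole point of splitting "find" from "delete".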
|
# ? Jan 5, 2015 15:02 |
|
Erwin posted:That should be fine as long as you rescan the backup repository, but you really should open a support case with Veeam so that A) they can make sure it's done the right way and B) they are made aware of their software doing dumb things. Veeam has improved a lot in the past couple of years, but they can't keep improving unless they know what is wrong with their product. That seemed to do the trick. Thanks
|
# ? Jan 5, 2015 21:24 |