|
So the architects who designed the DevOps ecosystem I deal with apparently never discussed the matter with the Ops team that owns the environment. We just got told that they won’t integrate puppet agents into server builds and that we couldn’t use Bladelogic anymore to do the deed as they “don’t get involved on the application level”. Meanwhile I have 400 servers that a billion-dollar program is sure expecting to have done very soon. Some poor bastard is going to be having a very bad day soon and I don’t know if it’s going to be the architect or the ops guy.
Warbird fucked around with this message at 03:14 on Oct 9, 2018 |
# ? Oct 8, 2018 23:10 |
|
|
|
Ansible is agentless; if you can ssh into those servers, it's where I would recommend starting. e: or winrm
|
# ? Oct 9, 2018 00:22 |
|
If they have the privileges. Given the described level of dysfunction, we can't take that for granted.
|
# ? Oct 9, 2018 01:10 |
|
Suffice it to say we don’t and are federally prohibited from having any. The entire thing is run by a team of less than a dozen and they still create Linux boxes by putting a disk in a tray.
|
# ? Oct 9, 2018 02:21 |
|
Give me the billion dollars. I promise to at least set up PXE boot.
|
# ? Oct 9, 2018 02:45 |
|
Warbird posted:Suffice it to say we don’t and are federally prohibited from having any. The entire thing is run by a team of less than a dozen and they still create Linux boxes by putting a disk in a tray. Are you sure linux is safe maybe you should use HP-UX, the vendor supported os
|
# ? Oct 9, 2018 03:09 |
|
Most of the platforms out there have some form of a hook that can run stuff at boot, and almost everything out there has some form of agents to work with stuff at an OS level (part of why security folks get so antsy about all these different layers - it's a ton of places to inject commands). VMware has agents that can run commands on guest OSes as long as you're running the VMware Guest agent, for example. UserData constructs also exist in OpenStack. PXEBoot and TFTPboot are possible options, but if you're getting there, you're now into "we're gonna bootstrap like it's 1999" territory. Just please don't tell me you're required to use Opsware.
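For the UserData route, a minimal cloud-config sketch (the bootstrap URL is made up) that runs a script once at first boot:

```yaml
#cloud-config
# Hypothetical user-data passed in at instance creation (OpenStack, EC2, etc.);
# cloud-init runs each runcmd entry once, on the first boot only.
runcmd:
  - curl -sSf https://config.example.com/bootstrap.sh | sh
```

Whatever that script pulls down can then install your agent of choice, which is how most shops bootstrap config management without touching the base image.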
|
# ? Oct 9, 2018 03:53 |
|
Warbird posted:the architects who designed the DevOps ecosystem ... apparently never discussed the matter with the Ops team that owns the environment
|
# ? Oct 9, 2018 15:43 |
|
Warbird posted:Suffice it to say we don’t and are federally prohibited from having any. The entire thing is run by a team of less than a dozen and they still create Linux boxes by putting a disk in a tray. You don't actually need root to run ansible, fortunately. You set your ssh user as part of your remote block (https://docs.ansible.com/ansible/2.5/user_guide/intro_getting_started.html#remote-connection-information), which by default just runs with whatever OpenSSH config you already have. Ability to SSH to the machine is the only hard requirement, for linux boxes anyway.
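As a sketch of what that looks like, assuming the boxes share some non-root deploy account (hostnames and user are made up), a minimal inventory:

```ini
# Hypothetical inventory: ansible_user sets the SSH login per group,
# no root or agent required on the targets.
[webservers]
web01.example.com
web02.example.com

[webservers:vars]
ansible_user=deploy
```

An ad-hoc check like `ansible webservers -i inventory -m ping` would then connect as that account over plain SSH.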
|
# ? Oct 9, 2018 17:09 |
|
Good to know going forward. However these are all Win2012 boxes and they told us to pound sand when Ansible or anything else was brought up. I’m fairly sure Puppet adoption was only tolerated due to C suite strong arming. Is there a Krebsonsecurity equivalent for DevOps/Ops that I should know about? Preferably something at a 5th grade reading level.
|
# ? Oct 9, 2018 17:39 |
|
Is there something lightweight I can use to manage 3-4 mac minis? These are build servers, so I need to keep the software environment consistent, but it needs to save me enough time that the investment in learning a new tool, migrating to it, and maintaining it is worthwhile. That's an easy bar to meet because I just got bit by an outdated version of Python, but at this point a checklist on a piece of paper may be the way to go.
|
# ? Oct 10, 2018 21:38 |
|
I’d use ansible. No clients to set up, just ssh keys. Works with yaml files so not too difficult to manage either.
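For the Mac mini case, a minimal playbook sketch, assuming Homebrew is already on the machines (the group name and package list are made up):

```yaml
# site.yml - hypothetical playbook keeping the build toolchain consistent
- hosts: buildservers
  tasks:
    - name: Ensure build dependencies are installed
      homebrew:
        name:
          - python
          - cmake
        state: present
```

Run it with `ansible-playbook -i inventory site.yml`; for 3-4 machines that plus an inventory file is about all the tooling you need.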
|
# ? Oct 10, 2018 21:53 |
|
I used to manage 1,000 Mac Minis with Chef and it was a decently pleasant experience, but for what you're trying to do you're unlikely to hit any of the strengths and weaknesses of Chef, Puppet, or Ansible, so you might as well just figure out which one's syntax you like the best
|
# ? Oct 11, 2018 04:12 |
|
*Macs Mini
|
# ? Oct 11, 2018 06:42 |
|
+1 for Ansible, it works great for just about anything with SSH daemon (or WinRM for the windows machines).
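For the Windows side, the WinRM connection settings go in inventory variables, roughly like this (hostname is made up; transport and port depend on how WinRM is configured on your boxes):

```ini
# Hypothetical inventory for Windows targets over WinRM instead of SSH
[windows]
win01.example.com

[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_port=5986
```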
|
# ? Oct 11, 2018 10:36 |
|
The only thing I'd mention is that by default ansible runs on a push model, so if you need this automation to keep the environment consistent because folks are making changes, you should be aware that you are going to have to re-run a playbook every time you want to enforce this desired state. There are some approaches to this problem (AWX, ansible-pull), but they're kind of heavyweight for your use case IMHO.
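For completeness, the pull-model alternative is just a cron entry on each node running ansible-pull against a playbook repo (the repo URL and log path are made up):

```
# Hypothetical crontab entry: re-apply desired state every 30 minutes
*/30 * * * * ansible-pull -U https://git.example.com/config.git local.yml >> /var/log/ansible-pull.log 2>&1
```

That inverts the model - nodes enforce their own state on a schedule - at the cost of hosting the repo and watching the logs, which is why it's arguably overkill for a handful of build machines.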
|
# ? Oct 11, 2018 18:50 |
|
Are you guys using 2FA on any of your internal services behind the firewall? We are rolling out LDAP to our internal services, but have the option to integrate U2F in a couple of them.
|
# ? Oct 11, 2018 19:00 |
|
Warbird posted:Good to know going forward. However these are all Win2012 boxes and they told us to pound sand when Ansible or anything else was brought up. I’m fairly sure Puppet adoption was only tolerated due to C suite strong arming. Following up on this, it was all a "misunderstanding" once people a few pay grades higher got wind of things and made calls. We're all still busted af, but it's the normal sorta busted.
|
# ? Oct 11, 2018 19:17 |
|
Hadlock posted:Are you guys using 2FA on any of your internal services behind the firewall? We are rolling out LDAP to our internal services, but have the option to integrate U2F in a couple of them. Every one of our instances has Duo requiring 2fa for ssh access. Definitely do it.
|
# ? Oct 11, 2018 23:07 |
|
We're moving to using LDAP + SSO / MFA via JumpCloud for all production environments' instances and because our developers still ssh into instances, those accounts will still get LDAP but won't be held to the same stringent standards. Before my tenure, our freakin' VPN instance was put in the same AWS account that developers roam around on all day with AWS *:* permissions (that we have been unable to revoke due to God Almighty Jenkins being so important and releases tied to random AWS users that defy auditability) so all of this may be borderline security theater.
|
# ? Oct 12, 2018 00:05 |
|
necrobobsledder posted:We're moving to using LDAP + SSO / MFA via JumpCloud for all production environments' instances and because our developers still ssh into instances, those accounts will still get LDAP but won't be held to the same stringent standards. Before my tenure, our freakin' VPN instance was put in the same AWS account that developers roam around on all day with AWS *:* permissions (that we have been unable to revoke due to God Almighty Jenkins being so important and releases tied to random AWS users that defy auditability) so all of this may be borderline security theater. It's eerie how some of these posts mirror my own reality and make me suspicious who on here is my coworker.
|
# ? Oct 12, 2018 01:00 |
|
going through keysets and seeing ones named things like "service-account". what service? where did this come from?
|
# ? Oct 14, 2018 02:47 |
|
Stringent posted:*Macs Mini thanks
|
# ? Oct 14, 2018 05:44 |
|
Is this an appropriate place for beginner Docker questions? I'm using Docker for Windows and Windows containers. I'm using a slightly modified version of this Dockerfile (https://github.com/SharpSeeEr/Dockerfiles/blob/master/Elasticsearch/Dockerfile) to build an image for ElasticSearch on Windows server. The file appears to define a volume but whenever I add some data to ES, stop and remove the container, then re-run it the data is gone. My internet reading suggests this is expected (data retains if container is stopped and re-started without removing), but how do I make it not do that? Or am I approaching this the wrong way? Let's say I eventually have a production environment that is running a container based on that image, but I decide I want to alter the image but keep the existing data when making a new container from the newly-altered image. Am I not expected to be making changes to images at the point that I depend on the existing data in the container? Do I just make a data backup that I load back into the container when it starts? This is all new to me and I don't think I understand the way I should be dealing with this situation.
|
# ? Oct 17, 2018 17:38 |
|
Opulent Ceremony posted:Is this an appropriate place for beginner Docker questions? I'm using Docker for Windows and Windows containers. Best bet is to read the Docker docs on volumes: https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume
|
# ? Oct 17, 2018 17:51 |
|
Vulture Culture posted:The VOLUME instruction is an image option, not a container option—it sets a very subtle behavior on containers created from the image, but it can't itself make Docker mount a volume at that path when you run a container based on the image. To mount a volume, you need to specify the -v/--volume argument to docker run. Thank you! I didn't realize the Dockerfile VOLUME instruction was different. I added a named volume to my docker-compose and added the volumes config for the service that uses that image to use the named volume at the directory noted in the Dockerfile, and now my ES data persists between containers.
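The fix described above looks roughly like this in compose terms (image, service, volume name, and data path are made up; the path must match the VOLUME in the Dockerfile):

```yaml
# Hypothetical docker-compose.yml: a named volume survives
# removing and recreating the container.
version: "3"
services:
  elasticsearch:
    image: my-es-image
    volumes:
      - 'esdata:C:\elasticsearch\data'
volumes:
  esdata:
```

An anonymous volume from a bare VOLUME instruction gets a new identity each `docker run`; naming it is what ties the data to the service instead of to one container.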
|
# ? Oct 17, 2018 18:46 |
|
The best advice I can give if you are starting on Windows containers is to not use Windows containers if at all possible. The only thing I've run into so far that only works in Windows containers is the Cosmos DB Emulator, but there's loads of software with official Docker containers that are only provided for Linux. You can always roll your own if you have to, but a lot of the benefit of Docker is that someone else already did the hard work for you. So unless you're really desperate to run a legacy .Net application in Docker I would take the easy route.
|
# ? Oct 17, 2018 22:44 |
|
Scikar posted:So unless you're really desperate to run a legacy .Net application in Docker I would take the easy route. I appreciate your thoughts but this is my goal. Docker for Windows also appears to have issues running Linux and Windows containers side-by-side so everything else has to be a Windows container too.
|
# ? Oct 17, 2018 23:29 |
|
If you get it to work, please post your notes and experiences here. I've yet to hear of anyone get Windows containers to actually work so good luck with your endeavors. Linux is pretty well baked at this point but I have yet to see a good writeup of a functional Windows container system. I haven't really been looking lately though. Good luck sir
|
# ? Oct 17, 2018 23:56 |
|
Opulent Ceremony posted:I appreciate your thoughts but this is my goal. Docker for Windows also appears to have issues running Linux and Windows containers side-by-side so everything else has to be a Windows container too. This will theoretically not be an issue in the near future. Docker had a PoC demo in a session at ignite running Windows and Linux containers side by side.
|
# ? Oct 17, 2018 23:58 |
|
Hadlock posted:If you get it to work, please post your notes and experiences here. I did this last spring. It was awful so I wrote a guide on how I did it. https://github.com/mooseracer/WindowsDocker
|
# ? Oct 18, 2018 01:54 |
|
Windows on containers is so busted even microsoft threw up their hands and recommends porting your code to .net core and running it on linux for microservices
Extremely Penetrated posted:I did this last spring. It was awful so I wrote a guide on how I did it. https://github.com/mooseracer/WindowsDocker Bhodi fucked around with this message at 04:29 on Oct 18, 2018 |
# ? Oct 18, 2018 04:26 |
|
yes but it looks good on the resume. My main complaints are about Swarm and the overlay network. The Windows hosts are so unreliable I had to build swarm-rejoining scripts. I feel like uptime would be better if there were no cluster, just a single manager. But it did improve with patching as I was building it out. Started with Server 1709 and Docker EE 17.06-06 and it barely worked at all. Maybe someday this will be a viable option for hosting a containerized version of your bullshit mission critical legacy ASP app until the end of time.
|
# ? Oct 18, 2018 18:26 |
|
Which deployment tools work with apps that are written in different languages? Manage a PHP and a Python app deployment from one source.
|
# ? Oct 23, 2018 17:37 |
|
All of them? I would put the python code and the php code in separate repos, but you should otherwise not have any issues with your deployment tool of choice.
|
# ? Oct 23, 2018 17:56 |
|
The Fool posted:All of them?
|
# ? Oct 23, 2018 17:56 |
|
I'm a bit new to the whole DevOps world, coming from being a generalist Windows sysadmin, but I've been put on an Azure project to help deal with the infrastructure side of things. This is something I'm really interested in learning.

The tools our dev teams currently use:
- SVN for source control
- Bamboo as a build server
- Octopus Deploy (and some homegrown tools) for deployment

What I am aware of:
- We should use ARM templates to define our resource groups (and the resources in them)
- We should keep our ARM templates in source control
- This project will be using mostly Azure solutions, like API Management, Service Bus, App Services, CosmosDB (no IaaS as far as I am currently aware)

What I'd like to find out:
- The best way (best practices in general lol) to put together ARM templates, whether through VSCode or getting a Visual Studio license since it has those capabilities
- Again, general structure? Do we throw everything from the RG into one template? Linked/nested templates?
- How is deploying new app service releases handled in this situation?
- Is there anything to not define in an ARM template?
- How to handle dev/test/prod environments in Azure? Do I use something like variable substitution in Octopus to separate these out? Is this even recommended in Azure?
- Getting the template from the editor over to Octopus? I imagine I can use Bamboo to 'build' the package after an update has been checked in (or at least zip it up) so Octopus can process it for deployment.

Spring Heeled Jack fucked around with this message at 15:03 on Oct 24, 2018 |
# ? Oct 24, 2018 13:51 |
|
Avoid ARM templates. Look at Terraform (if you have to; its specification language sucks wind) or preferably Pulumi (Terraform providers with a specification layer not made by people who fear loops--former Microsoft devdiv folks, mostly) so long as a little, simple code won't freak out people you're working with. When you have actual code constructs instead of ARM JSON splatters it's a lot easier to figure out how you want to modularize your code, because it now actually is code, and most of it becomes a matter of taste. tracecomplete fucked around with this message at 14:31 on Oct 24, 2018 |
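For comparison, a minimal Terraform sketch of just an Azure resource group (the name and location are made up):

```hcl
# Hypothetical main.tf: the equivalent resource in ARM JSON is
# considerably noisier, and HCL at least gives you variables and modules.
provider "azurerm" {}

resource "azurerm_resource_group" "app" {
  name     = "rg-myapp-dev"
  location = "East US"
}
```

`terraform plan` then shows the diff before anything is applied, which is the main workflow win over pushing template deployments at a subscription.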
# ? Oct 24, 2018 14:28 |
|
AFashionableHat posted:Avoid ARM templates. Look at Terraform (if you have to; its specification language sucks wind) or preferably Pulumi (Terraform providers with a specification layer not made by people who fear loops--former Microsoft devdiv folks, mostly) so long as a little, simple code won't freak out people you're working with. I had looked at and played around with Terraform previously. I would assume MS would want people to use its native platform, but I read something today about MS steering people towards Terraform for this as well.
|
# ? Oct 24, 2018 19:24 |
|
|
|
Lately MS seems to be doing a decent job of identifying tools the community prefers to use and then putting support behind those tools.
|
# ? Oct 24, 2018 19:35 |