|
How do you guys prevent pulling of images from docker hub / other default registries in your kubernetes clusters? After searching around I don't really see much helpful information on the topic.
|
# ? Jun 4, 2019 00:08 |
|
We address it indirectly I guess by requiring pull requests to deploy or change anything, and “wtf is this container you’re launching” is part of what reviewers are checking for. This is also obviously a fairly small engineering org where that is viable and not a huge blocker or time sink. But for a technical/automated solution I would look at adding an admission controller that inspects the source of the container and validates it against a whitelist or blacklist. https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/
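The core of what such an admission webhook would validate is just a prefix check on the image reference. A minimal sketch of that logic (the registry names here are hypothetical placeholders, not recommendations):

```shell
# Check a container image reference against an allowlist of registry prefixes,
# the way a validating admission webhook would before admitting a pod.
# Registry names below are placeholders.
allowed_prefixes="registry.internal.example.com/ quay.example.io/"

check_image() {
  image="$1"
  for prefix in $allowed_prefixes; do
    case "$image" in
      "$prefix"*) echo "ALLOW $image"; return 0 ;;
    esac
  done
  echo "DENY $image"
  return 1
}

check_image "registry.internal.example.com/team/app:1.0"
check_image "docker.io/library/nginx:latest" || true
```

Note that an image with no registry host at all (e.g. plain `nginx:latest`) implicitly means Docker Hub, so an allowlist of explicit prefixes like this also catches the bare-name case.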
|
# ? Jun 4, 2019 00:47 |
|
Wild guess: 1) Set up a Squid proxy 2) Use the HTTP{S}_PROXY envvars when running dockerd to point it at the Squid proxy (export HTTPS_PROXY=http://localhost:squidport). This means any HTTP operations like docker pull should be routed through the Squid proxy. 3) Configure Squid to block any attempt to retrieve from registry-1.docker.io, and/or any domain that looks like an IPv4/v6 address.
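A sketch of what the Squid side of that might look like (untested; the domain list is the set of Docker Hub endpoints as of 2019, so verify against actual pull traffic before relying on it):

```conf
# squid.conf sketch: deny Docker Hub registry endpoints and bare-IP hosts.
acl dockerhub dstdomain registry-1.docker.io auth.docker.io production.cloudflare.docker.com
acl bare_ip dstdom_regex ^[0-9:.]+$
http_access deny dockerhub
http_access deny bare_ip
http_access allow localhost
http_access deny all
```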
|
# ? Jun 4, 2019 00:54 |
|
Admission controller is the way to go.
|
# ? Jun 4, 2019 01:06 |
|
Docjowles posted:We address it indirectly I guess by requiring pull requests to deploy or change anything, and “wtf is this container you’re launching” is part of what reviewers are checking for. This is also obviously a fairly small engineering org where that is viable and not a huge blocker or time sink. Sorry for not being more clear. A webhook admission controller would likely be a decent path but there has to be a service for it to reach out to afaik. Are there any popular options out there that provide that back end, or options other than the webhook admission controller?
|
# ? Jun 4, 2019 01:16 |
|
If your k8s cluster is used as a build farm, then an admission controller won't prevent someone from doing a docker build with a FROM pointing to an undesired registry.
|
# ? Jun 4, 2019 01:17 |
|
Yeah ok I see what you’re saying. If you need to prove the “chain of custody” so to speak of a container, maybe you do need to do something awful like insert a proxy heh.
|
# ? Jun 4, 2019 01:47 |
|
Let's say I want to spin up a bunch of VMs to test a code commit. I was thinking there has to be some standard for dynamically getting these things instead of setting aside X amount of nodes to just be always-present to do this. I imagine this can be throttled and such as needed, and the resources can be released when not in use. What are the magic words for this? For what I'm doing, I think I need to use vagrant with Virtualbox due to having some GUIs involved. I'm trying to figure out how to request for this and have the right person on the other end understand it.
|
# ? Jun 7, 2019 20:39 |
|
Usually you would use containers instead of virtual machines for something like that
|
# ? Jun 7, 2019 20:46 |
|
And presumably, you can't just do exactly what you want in AWS? Use Terraform and/or CloudFormation to spin up and tear down the VMs? If the GUI testing is a web interface, then you can do headless testing via Selenium/Chromium, so no VM required.
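If AWS were on the table, the spin-up/tear-down part is a few lines of Terraform. Everything below (AMI id, instance type, tags) is a placeholder value, not a tested config:

```hcl
# Throwaway test VM: `terraform apply` creates it for the test run,
# `terraform destroy` releases it afterwards.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "gui_test" {
  ami           = "ami-0123456789abcdef0" # placeholder: a prebuilt test image
  instance_type = "t3.large"

  tags = {
    Purpose = "ci-gui-test"
  }
}
```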
|
# ? Jun 7, 2019 21:18 |
|
No they're standalone applications. Some of them have GUI automation around them and I want to make sure it all still works. It's definitely unusual so I can't even get mad that keeps coming up. I couldn't use Docker in one hop because of that. I don't know if I could instead, say, boot up a container and run Virtualbox from there. Maybe? I don't think that gives me too much there. I think we have some kind of access to Azure and the question is what are the magic words to pass along through various IT requests to get a fighting chance of somebody instantly recognizing what I'm trying to do here. I was thinking something like "dynamic node requests" or something.
|
# ? Jun 7, 2019 22:53 |
|
Rocko Bonaparte posted:Let's say I want to spin up a bunch of VMs to test a code commit. I was thinking there has to be some standard for dynamically getting these things instead of setting aside X amount of nodes to just be always-present to do this. I imagine this can be throttled and such as needed, and the resources can be released when not in use. What are the magic words for this? For what I'm doing, I think I need to use vagrant with Virtualbox due to having some GUIs involved. I'm trying to figure out how to request for this and have the right person on the other end understand it. I can tell you right now you’re going to hate everything if you try to make this work with Vagrant, because Vagrant has not progressed far from its roots as a tool that a developer uses on their own computer to make some ephemeral VMs. It is full of race conditions and shared state and unpleasant jankiness that will surface constantly in any sort of CI/CD pipeline. If you have any way of using a cloud provider, do that. I ended up hacking together various dreadful scripts a few years ago that let us use an existing Vagrant-based workflow to test infrastructure changes using libvirt, but nobody’s going to celebrate more than me when they are finally put to death.
|
# ? Jun 7, 2019 23:33 |
|
If you really need VM's, use OpenStack! do not use openstack
|
# ? Jun 8, 2019 01:20 |
|
Rocko Bonaparte posted:No they're standalone applications. Some of them have GUI automation around them and I want to make sure it all still works. It's definitely unusual so I can't even get mad that keeps coming up. I couldn't use Docker in one hop because of that. I don't know if I could instead, say, boot up a container and run Virtualbox from there. Maybe? I don't think that gives me too much there. you need azure virtual machines (if you're using teamcity or jenkins or w/e there's probably a way to have it automagically create a vm for you when a build starts), and presumably this is a windows application so you'll need to perform some fuckery to have automatically log in to an interactive session. at that point you'd use whatever your test automation bullshit is to run the tests.
|
# ? Jun 8, 2019 05:19 |
|
I guess I don't really have anywhere else to ask this, but I'm running a Node application on A2Hosting using CentOS. I manage most of it through SSH relatively successfully, but I've come across this problem where my screens, which I'm using to host the Node process, seem to just randomly crash overnight, causing my web project to obviously crash as well. I figure there must be some way of telling CentOS to make sure to constantly keep this one screen and one Node process running at all times and to restart it if it crashes. Or maybe there's some sort of screen idling timeout happening which causes it to kill it after a while? I'm a complete loving baby when it comes to Linux so if you're going to explain some sort of solution to this problem to me then you need to do it in a really dumb idiot way.
|
# ? Jun 9, 2019 13:21 |
|
Ape Fist posted:I guess I don't really have anywhere else to ask this but I'm running a Node Application on A2Hosting using CentOS. I manage most of it through SSH relatively successfully but I've come across this problem where my screens, which I'm using to host the Node process seem to just randomly crash overnight, causing my web project to obviously crash as well. I figure there must be some way of telling CentOS to make sure to constantly keep this one screen, and one Node process running at all times and to restart it if it crashes. Or maybe there's some sort of screen idling timeout happening which causes it to kill it after a while? Use this instead of screen https://pm2.io/doc/en/runtime/overview/
|
# ? Jun 9, 2019 13:38 |
|
JHVH-1 posted:Use this instead of screen https://pm2.io/doc/en/runtime/overview/ Importantly I'm not running the app through 'node app.js', it runs through a proprietary command
|
# ? Jun 9, 2019 14:42 |
|
Ape Fist posted:Importantly I'm not running the app through 'node app.js', it runs through a proprietary command Create a systemd service file with a Restart=always entry.
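For reference, a minimal unit file along those lines. The service name, user, paths, and start command below are all placeholders, since the real ExecStart would be whatever the app's proprietary launcher is:

```ini
# /etc/systemd/system/myapp.service (name and paths are placeholders)
[Unit]
Description=Node web app
After=network.target

[Service]
# Replace with the app's actual start command; it must stay in the
# foreground (not daemonize) for Restart= to work as expected.
ExecStart=/usr/local/bin/start-myapp
WorkingDirectory=/srv/myapp
User=myappuser
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now myapp` starts it and keeps it enabled across reboots.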
|
# ? Jun 9, 2019 15:26 |
|
ratbert90 posted:Create a aystemd service file with a restart=always entry. Ape Fist posted:I'm a complete loving baby when it comes to Linux so if you're going to explain some sort of solution to this problem to me then you need to do it in a really dumb idiot way.
|
# ? Jun 9, 2019 16:02 |
|
Honestly if I could figure out how the gently caress EnduroJS starts itself or where the entry file is located that'd be fine.
|
# ? Jun 9, 2019 16:05 |
|
I figured out how to get Forever running with it but not before almost bricking my Linux box.
|
# ? Jun 9, 2019 18:13 |
|
chutwig posted:Putting the Fear of The Lord in me.

I should just explain a bit more about what I'm trying to do and roughly how I would expect it to happen--without knowing much about how this is actually done. When I get a new code commit, I want to run a QA suite I have as a script on Windows and two specific Linux environments. I also need to test with Python 2.7 and Python 3.6. Part of the software installs a lot of other software that it wraps, and I am testing that integration. The actual installation process for those applications is part of the test, so I don't want to deploy them on the images ahead of time. Afterwards, I want to extract the build artifacts from the VMs and post them to our artifact repository. So the procedure will be something like:

1. Spin up VM
2. Inject pending code
3. Run QA suite, which will do unit tests, build on the machine (this is another requirement), and run a regression that will install this other software as it goes
4. Extract and post build artifacts

Due to the applications being classic GUI applications and GUI automation being involved in the software I am testing, I don't think I can just use Docker containers. I don't know if I could bring up a container with VirtualBox and then run my stuff in there. I thought I'd use Vagrant since it sounded like it could do the insertion and extraction fairly easily to a pile of pre-created VirtualBox VMs. Since this is done reactively to code commits, I figured I could dynamically request resources on which to run these and release them when they're done. Failing that, I'd just have one static setup and pending QA runs will just have to queue up. How I expect this to work:

1. A single core's worth of VM resource--heck, even a shared resource should work--is running our QA agent.
2. The QA agent receives notice of a new commit.
3. It accesses a cloud API to request VM resources per OS/Python permutation it has to run.
4. It gives each one a VM image. The lazy thing here would be to have the three test OSes on the same image and just have the QA agent machine tell it which one to actually use for each instance.
5. Part of the launch process would be a callback to the QA agent when these are done, with the result.
6. The QA agent would decommission the nodes--possibly after some checks.

I don't expect even tens of commits to come through at one time, so the master agent doesn't have to be powerful. It can be a static instance that's always on; I imagine it would have to be in order to respond to these requests from the source control management software.

uncurable mlady posted:you need azure virtual machines (if you're using teamcity or jenkins or w/e there's probably a way to have it automagically create a vm for you when a build starts), and presumably this is a windows application so you'll need to perform some fuckery to have automatically log in to an interactive session. at that point you'd use whatever your test automation bullshit is to run the tests.

The OSes I'm using for testing aren't necessarily the best for a server deployment, so I don't think I could just make the VM nodes themselves run those OSes. Rather, I imagine I would bring up some robust server OS and then subvirtualize a VM for each OS/Python permutation to do what I need to do. I know this metavirtualization thing makes things more complicated.
|
# ? Jun 10, 2019 17:32 |
|
Ape Fist posted:I'm a complete loving baby when it comes to Linux so if you're going to explain some sort of solution to this problem to me then you need to do it in a really dumb idiot way. There are plenty of documents on how to create a .service file.
|
# ? Jun 10, 2019 17:51 |
|
ratbert90 posted:There are plenty of documents on how to create a .service file.
|
# ? Jun 10, 2019 23:40 |
|
man systemd.service, systemd.exec, systemd.unit, systemctl. Systemd has a metric shitton of options for service units and like 99% you don't gaf about. Maybe google for "apache systemd unit" or similar to get an idea of options people actually use when they aren't stroking themselves about stopping state-level actors armed with 0days. Put the file in /etc/systemd/system, do the systemctl incantations, you're done. Yeah it's a lot to take in, yeah it's bewilderingly complex, but at least it's not sysvinit. Oh, and condolences on your future of knowing about linux.
|
# ? Jun 11, 2019 04:15 |
|
Kevin Mitnick P.E. posted:man systemd.service, systemd.exec, systemd.unit, systemctl. Systemd has a metric shitton of options for service units and like 99% you don't gaf about. Maybe google for "apache systemd unit" or similar to get an idea of options people actually use when they aren't stroking themselves about stopping state-level actors armed with 0days. Put the file in /etc/systemd/system, do the systemctl incantations, you're done. I'd argue init scripts are easier. Depending on how you write them systemd will "transcribe" (probably not the right term) them into a service that you can manage via systemctl as well. That being said I do much prefer systemctl, all the useful information in the status output alone is enough.
|
# ? Jun 11, 2019 04:29 |
|
https://www.openwall.com/lists/oss-security/2019/01/09/3 systemd owns
|
# ? Jun 11, 2019 04:58 |
|
Rocko Bonaparte posted:The OSes I'm using for testing aren't necessarily the best for a server deployment so I don't think I could just make the VM nodes themselves run the OSes. Rather, I imagine I would bring up some robust, server OS and then subvirtualize a VM for each OS/Python permutation to do what I need to do. I know this metavirtualization thing makes things more complicated. yeah no. you’re not gonna be able to do cute metavirtualization poo poo on public cloud, and it’s going to be a pain in the dick on private cloud. You can make your own windows images using non-server skus if you care, but I doubt it actually matters much and if it does matter then you’re well into the territory of needing to spend $$$ on a test lab and human beings
|
# ? Jun 11, 2019 05:25 |
|
have you tried to get your poo poo working in whatever environment azure gives you? what about this https://azure.microsoft.com/en-us/services/virtual-desktop/ either find a way to make it work with azure or buy some physical hardware and run your goofy operating systems in xen or something
|
# ? Jun 11, 2019 05:47 |
|
uncurable mlady posted:yeah no. you’re not gonna be able to do cute metavirtualization poo poo on public cloud, and it’s going to be a pain in the dick on private cloud. You can make your own windows images using non-server skus if you care, but I doubt it actually matters much and if it does matter then you’re well into the territory of needing to spend $$$ on a test lab and human beings
|
# ? Jun 11, 2019 11:14 |
|
I have some mix of surprised and unsurprised hearing about this. I know I'm trying to do something goofy with VMs. If I was just deploying some kind of service without these OS constraints, then we'd probably already have Docker containers in mind. We actually have a lot of lab infrastructure to house a server, but we didn't want it as some ball and chain to deal with. It would also be more idle than not most of the time, but I guess that's the cost of doing what we're trying to do.
|
# ? Jun 11, 2019 17:04 |
|
PBS posted:I'd argue init scripts are easier. Depending on how you write them systemd will "transcribe" (probably not the right term) them into a service that you can manage via sysctl as well. op wants a service that restarts when it crashes and you're sayin init scripts would be easier. lol I'm sorry your artisanal handcrafted pile of buggy scripts is obsolete. Have you considered retraining?
|
# ? Jun 11, 2019 19:49 |
|
Kevin Mitnick P.E. posted:I'm sorry your artisanal handcrafted pile of buggy scripts is obsolete. Have you considered retraining?
|
# ? Jun 11, 2019 20:09 |
|
Helianthus Annuus posted:have you tried to get your poo poo working in whatever environment azure gives you? I could probably just put my poo poo on Azure and it'd run fine but I like my provider's flat price cap.
|
# ? Jun 11, 2019 20:53 |
|
Kevin Mitnick P.E. posted:op wants a service that restarts when it crashes and you're sayin init scripts would be easier. Wasn't really talking about that specific scenario, but flock and a cronjob is pretty easy.
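For the record, the flock-and-cron version looks something like this. The crontab line is commented out as a sketch (paths are placeholders); the live command just demonstrates the locking behavior:

```shell
# Crontab entry would be something like (paths are placeholders):
#   * * * * * flock -n /tmp/myapp.lock -c '/usr/local/bin/start-myapp'
# flock -n acquires the lock or exits immediately, so cron retries every
# minute but never starts a second copy while one is still running.
flock -n /tmp/flock-demo-$$.lock -c 'echo holder runs'
```

The gap versus systemd: up to a minute of downtime after a crash, no logging or status for free, and the cron entry silently does nothing if the app wedges without releasing the lock.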
|
# ? Jun 11, 2019 21:33 |
|
PBS posted:Wasn't really talking about that specific scenario, but flock and a cronjob is pretty easy. It is, and also harder and worse than using systemd to handle restarts.
|
# ? Jun 11, 2019 22:10 |
|
Ape Fist posted:I could probably just put my poo poo on Azure and it'd run fine but I like my providers flat price cap. i should have clarified my reply about azure was for Rocko Bonaparte your situation is much more straightforward, you could use PM2 as others have suggested. or configure it as a systemd service. or put it in a docker and run it with a restart policy
|
# ? Jun 11, 2019 22:24 |
|
I'm looking into a storage abstraction for a product so it can run with no code changes in anything from a piddly under-the-table VM with a plain HDD to a MS/AWS/GC environment with that vendor's blob storage. Ideally, it would also handle a "I have a poor man's datacenter, N physical machines running orchestrated containers and some network storage (either a NAS or even each one with their own HDD), please replicate my data as much as you are able without giving it to those icky cloud vendors" scenario, which I'm really hoping to avoid but cannot dismiss out of hand. Is Ceph what I'm looking for or is there a less overkill solution? I've seen Storidge advertised around which seems simpler, but also very unproven.
|
# ? Jul 1, 2019 17:18 |
|
NihilCredo posted:Is Ceph what I'm looking for or is there a less overkill solution? I've seen Storidge advertised around which seems simpler, but also very unproven. Unless you have a team ready and waiting to support Ceph, I would recommend contacting your local NetApp VAR.
|
# ? Jul 1, 2019 22:39 |
|
chutwig posted:Unless you have a team ready and waiting to support Ceph, I would recommend contacting your local NetApp VAR. Yeah. Don't run ceph yourselves. Our ops team was given a mandate to run ceph. It was a bad time. Very very bad time.
|
# ? Jul 1, 2019 22:57 |