|
is vagrant more than a convenience layer on top of your typical hypervisors (vmware, virtualbox, qemu)?
|
# ? Nov 27, 2018 02:40 |
|
Think of Vagrant like a shittier Terraform but more for your local virtualbox/hyperv or whatever. I hardly use Vagrant at all anymore since Docker accomplishes the same thing.
|
# ? Nov 27, 2018 03:02 |
|
Docjowles posted:Current re:Invent status: Waiting in an hour long line to even register for the conference. Going to miss my first session. No food or coffee because everyplace that serves those also has an hour long line.

Lol I waited for 3 minutes at the airport to get my badge.
|
# ? Nov 27, 2018 04:52 |
|
Bhodi posted:it's pretty nuts. I got in last night at 2am and got up early because I knew badge line was gonna be long

Blinkz0rz posted:Lol I waited for 3 minutes at the airport to get my badge.

Yeah I was aware of the airport option but we landed on Sunday after checkin had closed for the day. I'm just super annoyed that the Venetian had no line at all for registration, was also the only place you could get your hoodie and other misc swag, and had coffee available with no line. Whereas 90% of attendees were apparently at the Aria, which was staffed by like 2 people. But whatever, I got through it and saw some cool sessions. I finally got food and caffeine at like 1PM. Work then paid for a lot of good food and booze and I have gotten over myself.

Regarding the busses: don't, lol. Schedule sessions within walking distance of each other. If you can't, enjoy the break. Do not use the bus, it only ends in sadness. This was true last year and it's not gonna get better with an additional 10k attendees or whatever. With the addition of the Bellagio as a venue, and pretending the MGM doesn't exist, I can pretty easily set up a walkable schedule for every day.

Edit: rereading that, this conference must be a complete nightmare for anyone with a physical handicap, and that sucks

Docjowles fucked around with this message at 08:58 on Nov 27, 2018 |
# ? Nov 27, 2018 08:53 |
|
Gyshall posted:Think of Vagrant like a shittier Terraform but more for your local virtualbox/hyperv or whatever.

Same, Vagrant is just a larger/slower alternative to docker containers for me at this point.
|
# ? Nov 28, 2018 21:45 |
|
Vagrant fuckin owned when it first came out, but yeah, it's mostly been obsoleted at this point by better things.
|
# ? Nov 29, 2018 00:28 |
|
i dont know why people would go to reinvent to learn things, its a vendor conference
|
# ? Nov 29, 2018 00:50 |
|
joiners gotta join
|
# ? Nov 29, 2018 02:44 |
|
dont blame me, i'm going to kubecon
|
# ? Nov 29, 2018 02:48 |
|
The best part about reinvent is ending up drinking on someone else's dollar in a private lounge. The conference itself is secondary.
|
# ? Nov 29, 2018 07:09 |
|
The best part of when I went to Velocity NYC last year was wandering around times square at 2 am with some British Fastly engineers after getting kicked out of the bar
|
# ? Nov 29, 2018 08:44 |
|
Blinkz0rz posted:The best part about
|
# ? Nov 29, 2018 14:50 |
|
Gyshall posted:Think of Vagrant like a shittier Terraform but more for your local virtualbox/hyperv or whatever.

Methanar posted:The best part of when I went to Velocity NYC last year was wandering around times square at 2 am with some British Fastly engineers after getting kicked out of the bar

Vulture Culture fucked around with this message at 15:50 on Nov 29, 2018 |
# ? Nov 29, 2018 15:47 |
|
you're probably thinking of the australian one
|
# ? Nov 29, 2018 17:50 |
|
Anyone else heading to/at DevopsCon in Munich? I am going even though my experience can best be described as being the dude who fixes the Jenkins, but I am hoping there will be some interesting stuff. It unfortunately looks like most of the entry level developer talks are in German, which strikes me as bizarre for an international conference, but there you go.
|
# ? Dec 3, 2018 08:54 |
|
Biomute posted:Anyone else heading to/at DevopsCon in Munich? I am going even though my experience can best be described as being the dude who fixes the Jenkins, but I am hoping there will be some interesting stuff. It unfortunately looks like most of the entry level developer talks are in German, which strikes me as bizarre for an international conference, but there you go.

I guess it would make sense in that if you're willing to travel for a devops conference, you're probably not entry level.
|
# ? Dec 3, 2018 18:37 |
|
Is there a better place to ask Docker questions than this thread? I'm trying to pilot a hybrid swarm with a Linux and a Windows node and I'm running into weirdness with the ingress mesh networking. Both are running Docker 18.09.0.

I have deployed a simple stack with 4 services on the Windows worker, which are accessible from their published ports on the Windows worker only. My understanding of the mesh networking is that these published ports should be accessible from the Linux node as well, as it will just route traffic internally to the correct host. I also have a service on the Linux host (swarmpit) that is not accessible from the Windows host on its published port.

Am I missing something with this setup? It looks like ingress mesh with swarm did not use to work on Windows at some point, but has been functioning since version 1709. Stack in question: https://pastebin.com/TqjKx97t

Spring Heeled Jack fucked around with this message at 15:11 on Dec 7, 2018 |
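Not a diagnosis of the Windows issue, but one thing worth ruling out in the stack file: swarm only applies the routing mesh to ports published in ingress mode, and host-mode ports bind only on the node actually running the task. A generic sketch (service and image names are made up, not taken from the pastebin) contrasting the two in compose v3 long syntax:

```yaml
version: "3.7"
services:
  web:
    image: example/web:latest
    ports:
      - target: 80
        published: 8080
        protocol: tcp
        mode: ingress   # default: routing mesh, reachable from any swarm node
  diag:
    image: example/diag:latest
    ports:
      - target: 80
        published: 8081
        protocol: tcp
        mode: host      # bound only on the node running the task
```

If the pastebin stack uses the short `"8080:80"` syntax, that already implies ingress mode, so the mesh itself would be the suspect rather than the stack file.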
# ? Dec 6, 2018 21:25 |
|
there are few dirtier words than "hybrid". its the "have your cake and eat it too" of tech.
|
# ? Dec 8, 2018 14:24 |
|
What is the learning curve like on Spinnaker vs Jenkins? Our biggest ask from engineering is setting up a build and deploy per branch. Nothing too exotic: a Java/Grails app monolith using Maven/Ant, starting to break out things as separate services and microservices.
|
# ? Dec 8, 2018 22:41 |
|
They serve two different purposes. Spinnaker is a deployment tool and Jenkins is a build tool. They can definitely be configured to work together, and Jenkins can do some deployment stuff, but you're better off looking at them separately.
|
# ? Dec 8, 2018 22:59 |
|
StabbinHobo posted:there are few dirtier words than "hybrid". its the "have your cake and eat it too" of tech.

Yeah, I opened an issue on the Docker for Windows git page and got acknowledgement that nothing is really wrong with my setup and it 'should work', but Windows containers are really just a crapshoot in this case.
|
# ? Dec 9, 2018 04:53 |
|
Hadlock posted:What is the learning curve like on Spinnaker vs Jenkins?

Spinnaker kills me because it's a great UI for deployment, but unless we just have a terrible implementation where I work, the API isn't versatile enough to start automating new deployments. Clicking through a UI just doesn't scale.
|
# ? Dec 9, 2018 05:47 |
|
Spinnaker is also flaky as hell; I really can't warn people away from it enough. Depending on what you are doing I like Keel or Flux, and Argo CD looks interesting, as Intuit is basically rewriting Spinnaker because they didn't like it. There is an API, though. And on Spinnaker vs Jenkins: Spinnaker uses Jenkins to execute most jobs, so no matter what you are gonna end up running Jenkins.
|
# ? Dec 9, 2018 05:55 |
|
For what it's worth, every shop I've ever heard of actually running Spinnaker has an engineering organization led by someone who used to work for Netflix. It's a very complicated piece of software, especially if you're relying on a container orchestration platform to run your apps instead of baking AMIs, which is one niche where it doesn't have a ton of competition.
|
# ? Dec 9, 2018 18:43 |
|
We had one team use Spinnaker and it's clunky and very slow, and we are a .NET/Windows company, which doesn't fit as nicely into some of the products or practices available.

Fortunately something magical happened last week: our infrastructure team obliterated the Spinnaker server because it had a "QA" tag and deleted all the backups as well. So right now that one team is being ported into ECS as the first goal in our "move all poo poo to containers" strategy.

Edit: We're probably going to restore Spinnaker but make it more ECS focused than huge Windows AMI focused.

Cancelbot fucked around with this message at 17:20 on Dec 11, 2018 |
# ? Dec 11, 2018 10:51 |
|
Does anyone else use linters when writing scripts? Am I the only one? VS Code has a linter for just about everything, including shellcheck for bash.

I just imported a coworker's 197-line build script, and the linter immediately picked up 52 issues, plus who knows how many trailing whitespace issues etc. I'm used to seeing 2-3 issues and a handful of linter ignore items.

I think vi/m is great for monkey patching poo poo, but holy hell, code written there looks like rear end if you need to modify it later in a GUI-ful editor.
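For anyone who hasn't run shellcheck yet, a tiny illustration (not taken from that build script) of its single most common finding, SC2086: an unquoted variable expansion word-splits on spaces.

```shell
#!/bin/sh
# SC2086 demo: a filename with a space breaks unquoted expansion.
dir=$(mktemp -d)
f="$dir/my file.txt"
touch "$f"

# Unquoted: $f expands to two words, so ls can't find the file.
ls $f >/dev/null 2>&1 || echo "unquoted lookup failed"

# Quoted: the path survives intact.
ls "$f" >/dev/null 2>&1 && echo "quoted lookup succeeded"

rm -rf "$dir"
```

Scripts full of findings like this usually work right up until a path with a space shows up, which is exactly why a 197-line script can rack up 52 issues while appearing to run fine.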
|
# ? Dec 12, 2018 01:20 |
|
Lint or die trying

Terraform fmt or die trying
|
# ? Dec 12, 2018 02:18 |
|
We have some extremely aggressive ESLint settings because Javascript is a foot cannon without it
|
# ? Dec 12, 2018 02:30 |
|
Gyshall posted:Lint or die trying

I'm always conflicted when starting at a new client whether to turn off format on save so I can just commit a drat change or die on that hill. When possible I add a terraform fmt to their pipeline and fail PRs if anything needs to be formatted.
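A sketch of that pipeline gate, assuming a `terraform` binary on the CI runner's PATH (the guard clause is just so the script degrades gracefully elsewhere). `terraform fmt -check` exits non-zero when any file would be rewritten, which fails the PR before review starts.

```shell
#!/bin/sh
# Fail the pipeline if any .tf file in the repo needs reformatting.
fmt_gate() {
  if ! command -v terraform >/dev/null 2>&1; then
    echo "terraform not on PATH; skipping fmt gate"
    return 0
  fi
  # -check: exit 1 on unformatted files; -diff: show what would change.
  terraform fmt -check -recursive -diff || {
    echo "unformatted files found; run 'terraform fmt -recursive' locally" >&2
    return 1
  }
}
fmt_gate
```

Running the same command without `-check` locally (or on save) keeps the gate from ever firing, which is the "die on that hill" resolution.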
|
# ? Dec 12, 2018 02:33 |
|
Vulture Culture posted:We have some extremely aggressive ESLint settings because Javascript is a foot cannon without it

use TypeScript and then aggressively lint the hell out of it too
|
# ? Dec 12, 2018 06:21 |
|
How do you guys do continuous prod deployments to systems that have message queue based communication, and handle heterogeneous application component versions consuming from shared queues? In a synchronous processing world that'd be an endpoint handled by different versions of the service.

We have a web request frontend; clients upload large artifacts separately (S3, their own hosting service, etc.), reference them in their API request, and processing is picked up asynchronously via SQS queues serialized as < 1 KB XML messages across several upstream services that self-report the status of their tasks to the primary Aurora MySQL DB.

I'm trying to set up an architecture in AWS using a canary / blue-green approach with environment-specific SQS queues, load balancers, and instances, but shared data stores like S3 buckets and DBs. DB updates to apps will be done by mutating their views, not by changing the actual underlying DB structures (the latency hit isn't measurable for us in tests so far).

This would allow us to make a bunch of changes in production as necessary, cherry pick messages from queues to run through a deployment candidate's queues, and roll back changes faster than we do now (a deployment process straight out of 1995, but in AWS and with 90% of our services unable to be shut down on demand without losing customer data, which really, really, really is a pain in the rear end).
|
# ? Dec 12, 2018 16:56 |
|
necrobobsledder posted:How do you guys do continuous prod deployments to systems that have message queue based communication and handle heterogeneous application component versions consuming from shared queues? In a synchronous processing world that'd be an endpoint handled by different versions of the service. We have a web request frontend, clients upload large artifacts separately (S3, their own hosting service, etc.), reference them in their API request, and processing is picked up asynchronously via SQS queues serialized as < 1 KB XML messages across several upstream services that self-report the status of their tasks to the primary Aurora MySQL DB. I'm trying to setup an architecture in AWS using a canary / blue-green approach using environment-specific SQS queues, load balancers, and instances but shared data stores like S3 buckets and DBs. DB updates to apps will be done by mutating their views, not by changing the actual underlying DB structures (the latency hit isn't measurable for us in tests so far). This would allow us to make a bunch of changes in production as necessary, cherry pick messages from queues to run through a deployment candidate's queues, and rollback changes faster than we do now (a deployment process straight out of 1995 but in AWS and with 90% of our services that can't be shut down on demand without losing customer data, which really, really, really is a pain in the rear end)

LOL, we don't, one of our primary apps uses zookeeper which is a mid aughts piece of poo poo that requires the config file to have all the other nodes listed in the config file on startup and is brittle as hell, absolutely not designed for cloud or containers and because everything relies on it, we have to bounce the entire thing every single time. There's no wedging some legacy apps into the blue/green canary deployment framework no matter how much you want to. They'd take a complete rewrite.
|
# ? Dec 13, 2018 03:54 |
|
Bhodi posted:LOL, we don't, one of our primary apps uses zookeeper which is a mid aughts piece of poo poo that requires the config file to have all the other nodes listed in the config file on startup and is brittle as hell, absolutely not designed for cloud or containers and because everything relies on it, we have to bounce the entire thing every single time. There's no wedging some legacy apps into the blue/green canary deployment framework no matter how much you want to. They'd take a complete rewrite.

people will ask for this kind of thing all the time without grasping the scope of the effort needed to make it happen

usually comes down to "do you want to stick to the runtime and release schedule this thing was designed for, or do you wanna pay a team of devs to rewrite it?"
|
# ? Dec 13, 2018 04:49 |
|
Bhodi posted:LOL, we don't, one of our primary apps uses zookeeper which is a mid aughts piece of poo poo that requires the config file to have all the other nodes listed in the config file on startup and is brittle as hell, absolutely not designed for cloud or containers and because everything relies on it, we have to bounce the entire thing every single time. There's no wedging some legacy apps into the blue/green canary deployment framework no matter how much you want to. They'd take a complete rewrite.
|
# ? Dec 13, 2018 05:12 |
|
necrobobsledder posted:How do you guys do continuous prod deployments to systems that have message queue based communication and handle heterogeneous application component versions consuming from shared queues? In a synchronous processing world that'd be an endpoint handled by different versions of the service. We have a web request frontend, clients upload large artifacts separately (S3, their own hosting service, etc.), reference them in their API request, and processing is picked up asynchronously via SQS queues serialized as < 1 KB XML messages across several upstream services that self-report the status of their tasks to the primary Aurora MySQL DB. I'm trying to setup an architecture in AWS using a canary / blue-green approach using environment-specific SQS queues, load balancers, and instances but shared data stores like S3 buckets and DBs. DB updates to apps will be done by mutating their views, not by changing the actual underlying DB structures (the latency hit isn't measurable for us in tests so far). This would allow us to make a bunch of changes in production as necessary, cherry pick messages from queues to run through a deployment candidate's queues, and rollback changes faster than we do now (a deployment process straight out of 1995 but in AWS and with 90% of our services that can't be shut down on demand without losing customer data, which really, really, really is a pain in the rear end)

jury is still out on if thats a good thing (i lean yes). sorry that doesn't really help you though because "rewrite everything to upgrade from rmq to kafka" is about as helpful as "install linux problem solved".
|
# ? Dec 13, 2018 05:33 |
|
Exhibitor is pretty nice and, while not perfect, is pretty much the only way to run zookeeper IMO.
|
# ? Dec 13, 2018 08:32 |
|
Zookeeper 3.5 seems to have been in prerelease since dinosaurs roamed the earth so I am not holding my breath for that one. (We still run 3.5 in production lol because it actually has functional support for certificate authentication)
|
# ? Dec 13, 2018 14:41 |
|
Bhodi posted:LOL, we don't, one of our primary apps uses zookeeper which is a mid aughts piece of poo poo that requires the config file to have all the other nodes listed in the config file on startup and is brittle as hell, absolutely not designed for cloud or containers and because everything relies on it, we have to bounce the entire thing every single time. There's no wedging some legacy apps into the blue/green canary deployment framework no matter how much you want to. They'd take a complete rewrite.

I was asking about CD for apps that would in theory use Zookeeper, and how to deal with two different apps using a known shared topic in a fan-out MQ configuration. More importantly, I'm just trying to figure out how to make some form of automated deployments not completely suck half your work hours as a very expensive deployment bitch when you really, really don't trust your application developers not to blow production SLAs with a regression on every commit.

The approach I'm advocating is to have a deployment candidate re-deploy to the non-live production variant; we give it no traffic, keep it from ever touching a shared queue, and drip-feed it messages. This is kinda why I was asking about service discovery way back - we could also point or set up circuit breakers for live-running applications to new SQS queues to support deployments that shift traffic from one queue to another, but it looks like changing running software configuration is just not happening, so it's back to config file edits, restarting, and so forth. Oh, and all the while making sure we don't do a restart around the top of the hour. It's like I'm herding a farm of mogwai.

Worst part is that most of this software was written from 2012 - 2014, so they should have known better. But this is what happens with founders that haven't worked at decent software shops despite being smart otherwise.
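Not endorsing the overall design, but the drip-feed step itself is simple with the aws CLI. In this sketch the queue URLs and the message count are made up; receiving with a visibility timeout of 0 means the message stays visible to the live consumers, so the candidate gets a copy rather than stealing work.

```shell
#!/bin/sh
# Hypothetical queue URLs for the live (blue) and candidate (green) stacks.
BLUE="https://sqs.us-east-1.amazonaws.com/123456789012/ingest-blue"
GREEN="https://sqs.us-east-1.amazonaws.com/123456789012/ingest-green"

drip_feed() {
  command -v aws >/dev/null 2>&1 || { echo "aws CLI not on PATH; skipping"; return 0; }
  i=0
  while [ "$i" -lt 5 ]; do
    # --visibility-timeout 0: peek at a message without hiding it from
    # the live consumers; nothing is deleted from the blue queue.
    body=$(aws sqs receive-message --queue-url "$BLUE" \
        --max-number-of-messages 1 --visibility-timeout 0 \
        --query 'Messages[0].Body' --output text)
    [ "$body" = "None" ] && break          # queue drained
    aws sqs send-message --queue-url "$GREEN" --message-body "$body" >/dev/null
    i=$((i + 1))
  done
}
drip_feed
```

Since the messages are < 1 KB XML, replaying the raw bodies is enough; anything relying on SQS message attributes would need those copied across as well.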
|
# ? Dec 13, 2018 15:30 |
|
We had a proxy that sent live traffic to the prod deployments and a copy to the candidate, and the candidate's responses were blackholed. Then we just watched error rates on the candidate to test its worthiness.
|
# ? Dec 13, 2018 16:28 |
|
With AWS load balancers we can’t mirror requests like if we were running nginx or haproxy. Heck, I’d take iptables. But goreplay seems like a tool worth looking at with some modifications for our needs. We have some PII issues not to mention GDPR that might make this all a no-go though. I’d say spending my time writing some data masking tooling and shoving requests to our numerous test environments would be more worthwhile than trying to do something that can’t be done by design.
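For anyone who does end up fronting this with nginx, the proxy arrangement Blinkz0rz described is a few lines with the `mirror` directive. Hostnames here are placeholders, and nginx discards the mirror subrequest's response, which gives you the blackholing for free:

```nginx
server {
    listen 80;

    location / {
        mirror /candidate;                  # async copy of every request
        proxy_pass http://prod.internal;    # the real response comes from prod
    }

    location = /candidate {
        internal;                           # not reachable from outside
        proxy_pass http://candidate.internal$request_uri;
    }
}
```

The candidate's error rate and logs then become the promotion signal, exactly as described above; request masking for PII would still have to happen before or inside the candidate.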
|
# ? Dec 13, 2018 17:52 |