necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Vulture Culture posted:

Shove that poo poo in Vault
I wish every company I've worked for would let even lead engineers deploy whatever they want, or that said engineers even had the time to deploy anything new whatsoever.

I think I've lost enough time dealing with Jenkins' problems here that I'll be able to get leadership to let me do something the right way. Most places I've worked are too concerned with losing their (very poorly implemented, costly) investment in Jenkins; it's like a battered housewife justifying her abusive husband's behavior.


Hadlock
Nov 9, 2004

Ok I have a kops Kubernetes cluster in AWS. That means it sits behind an ELB load balancer.

I have my network guy who wants to send network device logs to our graylog instance. Most of his devices only allow IP and do not have the option for DNS. This is a non-negotiable networking BS thing.

So, the idea was to set up a virtual IP (elastic IP) and point it at the ELB. Then route all incoming port 53 traffic to kubernetes and program the kubernetes cluster to route to the bind container doing its thing.

Except that you can't bind an EIP/VIP to an ELB in AWS, because gently caress you, that's why.

We're also doing a bind container in the cluster, for reasons. Both problems have the same general solution so I'll talk about that:

One option that has been floated is to spin up a new EC2 instance, run HAProxy on it and then point THAT at the ELB. That feels very crunchy though. I know that DNS instead of static IP solves lots of problems, but we still need raw static IP load balancing for this specific case.

Thoughts? Are there existing Amazon networking tools we can use to glue a static IP to my dynamic kubernetes cluster? Maybe bypassing the ELB somehow. Not sure.

I just really don't want to have to spin up a dedicated HAProxy box for this. Any sort of off the shelf AWS tools would be preferable to managing yet another named pet server.

Hadlock fucked around with this message at 01:29 on May 17, 2018

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

Ok I have a kops Kubernetes cluster in AWS. That means it sits behind an ELB load balancer.

I have my network guy who wants to send network device logs to our graylog instance. Most of his devices only allow IP and do not have the option for DNS. This is a non-negotiable networking BS thing.

So, the idea was to set up a virtual IP (elastic IP) and point it at the ELB. Then route all incoming port 53 traffic to kubernetes and program the kubernetes cluster to route to the bind container doing its thing.

Except that you can't bind an EIP/VIP to an ELB in AWS, because gently caress you, that's why.

We're also doing a bind container in the cluster, for reasons. Both problems have the same general solution so I'll talk about that:

One option that has been floated is to spin up a new EC2 instance, run HAProxy on it and then point THAT at the ELB. That feels very crunchy though. I know that DNS instead of static IP solves lots of problems, but we still need raw static IP load balancing for this specific case.

Thoughts? Are there existing Amazon networking tools we can use to glue a static IP to my dynamic kubernetes cluster? Maybe bypassing the ELB somehow. Not sure.

I just really don't want to have to spin up a dedicated HAProxy box for this. Any sort of off the shelf AWS tools would be preferable to managing yet another named pet server.
You could use an affinity rule to pin the instance to a specific K8s worker node, then expose the service through a NodePort.
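For anyone following along, the shape of that suggestion, written as a plain-dict Service manifest you could render to YAML for kubectl or feed to the Kubernetes Python client, is roughly this; the names, labels, and port numbers are placeholders, not anything from a real cluster:

```python
# Hypothetical NodePort Service for the UDP syslog/DNS case: with a node
# affinity rule pinning the pod to one worker, that worker's IP plus the
# fixed nodePort gives IP-only devices a stable target.
udp_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "graylog-syslog"},   # placeholder name
    "spec": {
        "type": "NodePort",
        "selector": {"app": "graylog"},       # placeholder label
        "ports": [
            {
                "name": "syslog-udp",
                "protocol": "UDP",
                "port": 514,        # port inside the cluster
                "targetPort": 514,  # container port
                "nodePort": 30514,  # fixed port on every node (30000-32767)
            }
        ],
    },
}
```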

Hadlock
Nov 9, 2004

Oh, can I just have Kops assign an existing ENI (elastic network interface) to a random node in my auto scaling group?

Option B is I write some sort of lambda script to poll the status of my ENI and reassign it to another node in the autoscaling group if it's unhealthy? Or can I have the lambda run when my ENI cloudwatch alarm goes off?

Am I over thinking this?
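For what it's worth, Option B is only a few dozen lines of boto3. A rough sketch, with the ENI and ASG names as placeholders and only minimal error handling, triggered from the CloudWatch alarm rather than a polling loop:

```python
# Sketch of the "lambda reassigns the ENI" idea: on a CloudWatch alarm,
# detach the ENI from the unhealthy node and attach it to another
# InService instance in the autoscaling group. IDs are placeholders.
ENI_ID = "eni-0123456789abcdef0"
ASG_NAME = "k8s-workers"

def pick_healthy_instance(asg_description, exclude=None):
    """Return the first InService/Healthy instance ID, skipping `exclude`."""
    for inst in asg_description["AutoScalingGroups"][0]["Instances"]:
        if inst["InstanceId"] == exclude:
            continue
        if inst["LifecycleState"] == "InService" and inst["HealthStatus"] == "Healthy":
            return inst["InstanceId"]
    return None

def handler(event, context):
    import boto3  # ships in the Lambda runtime
    ec2 = boto3.client("ec2")
    asg = boto3.client("autoscaling")

    eni = ec2.describe_network_interfaces(
        NetworkInterfaceIds=[ENI_ID])["NetworkInterfaces"][0]
    attachment = eni.get("Attachment")
    current = attachment.get("InstanceId") if attachment else None

    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    target = pick_healthy_instance(groups, exclude=current)
    if target is None:
        return {"moved": False, "reason": "no healthy instance in ASG"}

    if attachment:
        ec2.detach_network_interface(
            AttachmentId=attachment["AttachmentId"], Force=True)
        ec2.get_waiter("network_interface_available").wait(
            NetworkInterfaceIds=[ENI_ID])
    # DeviceIndex 0 is the node's primary interface, so attach as eth1
    ec2.attach_network_interface(
        NetworkInterfaceId=ENI_ID, InstanceId=target, DeviceIndex=1)
    return {"moved": True, "target": target}
```

Wiring the CloudWatch alarm to the Lambda (via an SNS alarm action) avoids the polling loop entirely.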

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Network load balancer?

Supports static IPs per subnet.

SeaborneClink
Aug 27, 2010

MAWP... MAWP!

freeasinbeer posted:

Network load balancer?

Supports static IPs per subnet.

How's that going to work out via UDP? :thunk:

Hadlock
Nov 9, 2004

Vulture Culture posted:

You could use an affinity rule to pin the instance to a specific K8s worker node, then expose the service through a NodePort.

Yes but what happens when the node crashes or gets reaped 25 minutes after it boots, or in the middle of the night or whenever whoever set it up goes on vacation etc etc?

Hadlock
Nov 9, 2004

Yeah, looked at the network load balancer, and yes, one of the IP-driven services is UDP. The lambda round-robin ENI is looking pretty good at this point.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

Yes but what happens when the node crashes or gets reaped 25 minutes after it boots, or in the middle of the night or whenever whoever set it up goes on vacation etc etc?
If these are ephemeral, another option is to not use affinity, but have something in your pod attach a known elastic IP to the running host. I'm sort of assuming you're running a K8s-first setup and anything not in Kubernetes is considered a snowflake; if not, there's not much argument against having a vanilla EC2 instance or something handle the forwarding to the cluster from a statically-addressed endpoint. Boring is good too.
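The "pod claims a known EIP" approach boils down to something like this on startup; the allocation ID is a placeholder, and this assumes the pod (or a host-level agent) has instance-profile credentials allowing `ec2:AssociateAddress`:

```python
# Sketch: on startup, ask the EC2 metadata service who we are, then pull
# a pre-allocated Elastic IP onto this host. The allocation ID is made up.
import urllib.request

EIP_ALLOCATION_ID = "eipalloc-0123456789abcdef0"  # placeholder
METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"

def current_instance_id(url=METADATA_URL):
    """Fetch this instance's ID from the EC2 metadata service."""
    with urllib.request.urlopen(url, timeout=2) as resp:
        return resp.read().decode()

def eip_request(instance_id, allocation_id=EIP_ALLOCATION_ID):
    """Arguments for ec2.associate_address; AllowReassociation lets a
    replacement pod steal the EIP away from a dead host."""
    return {
        "AllocationId": allocation_id,
        "InstanceId": instance_id,
        "AllowReassociation": True,
    }

def claim_eip():
    import boto3  # assumed available on the host/pod
    boto3.client("ec2").associate_address(**eip_request(current_instance_id()))

if __name__ == "__main__":
    claim_eip()
```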

Vulture Culture fucked around with this message at 13:01 on May 17, 2018

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki

Hadlock posted:

I have my network guy who wants to send network device logs to our graylog instance. Most of his devices only allow IP and do not have the option for DNS. This is a non-negotiable networking BS thing.

Throw said devices in a trashcan from 1994, where they belong.

spiritual bypass
Feb 19, 2008

Grimey Drawer
Send them to a nearby proxy server that does support DNS? Haproxy or something, idk

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

rt4 posted:

Send them to a nearby proxy server that does support DNS? Haproxy or something, idk
Yeah, I was way overthinking this when I was answering from a red-eye flight, but the right answer is just to throw this poo poo at some local queueing layer, something like rsyslog, that's ultimately going to fire the logs at the right place.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Azure VSTS question

I'm slowly trying to bring my dev practices closer to tyool 2018. This week's goal is to get all my common/shared dlls building automatically and publishing to a private nuget repo (all using Visual Studio Online).

I have a new git repo created for the shared projects, and I'm starting with a common one with no other dependencies to make things as simple as possible, but I'm hitting a snag: the dll is signed, and the .pfx is protected by a password. Obviously when building locally this works fine because I've given VS the password. But the Azure build agent doesn't have the password, so I get the following error in the build log:

quote:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(3156,5): Error MSB3325: Cannot import the following key file: blah.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_3F96EE9FFF3951D5

I found some snackoverflow posts which aren't entirely helpful either. For example, one suggests importing the signing cert in a step prior to the build, which I've done similar to this:

code:
$pfxpath = 'path.to.pfx'
$password = 'password'

Add-Type -AssemblyName System.Security
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
# PersistKeySet keeps the private key on disk after import
$cert.Import($pfxpath, $password, [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]"PersistKeySet")
# open the CurrentUser\My (Personal) store read-write and add the cert
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList "MY", "CurrentUser"
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]"ReadWrite")
$store.Add($cert)
$store.Close()

This runs fine, but it doesn't seem to be importing it to the place that the build tool expects to find it, because I still get the error

quote:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(3156,5): Error MSB3325: Cannot import the following key file: blah.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_13620F4AB3C1C9B0

I'm guessing I need to modify my powershell script to install the cert into the specified key container name, but this name changes each time, and I don't know how to query that.

So what's the best way to get VSO to build a signed assembly? I don't mind using a non-password-protected .pfx, but then that means I can't check that into git, so I don't know how to get the pfx into the build step without being in the repo. Any ideas?

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
i dealt with that exact same problem; you can't import that pfx in a non-interactive fashion in any way, so your only option is to create a new keyfile (the extension MS uses for this is snk, i believe), change the path in the csproj to point to the snk file, then commit that to vcs.

ed: you can always host the snk file on s3 or the azure equiv. then download it as part of the build if you're unable to check it into source control.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

uncurable mlady posted:

i dealt with that exact same problem; you can't import that pfx in a non-interactive fashion in any way, so your only option is to create a new keyfile (the extension MS uses for this is snk, i believe), change the path in the csproj to point to the snk file, then commit that to vcs.

ed: you can always host the snk file on s3 or the azure equiv. then download it as part of the build if you're unable to check it into source control.

Thanks! That worked perfectly

OWLS!
Sep 17, 2009

by LITERALLY AN ADMIN

Hadlock posted:

Yeah, looked at the network load balancer, and yes, one of the IP-driven services is UDP. The lambda round-robin ENI is looking pretty good at this point.

We ended up spooling up a monstrosity involving nginx load balancers, lambdas that rebound EIPs, R53 names, etc, etc.

Janky, but it works.

gently caress UDP traffic, and gently caress amazon NLBs for not supporting it

Hadlock
Nov 9, 2004

OWLS! posted:

We ended up spooling up a monstrosity involving nginx load balancers, lambdas that rebound EIPs, R53 names, etc, etc.

Janky, but it works.

gently caress UDP traffic, and gently caress amazon NLBs for not supporting it

I think we work at the same company as that's the same general solution we came up with yesterday

Found a nice docker container that just does UDP 53 load balancing using nginx; pass in the load-balanced hosts as a string in an env var, running it on two hosts. A lambda scrapes the autoscaling group IPs and then relaunches the container on each docker host with the new env var string.

It's a loving mess.

Also super gently caress aws for not doing UDP load balancing
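The entrypoint of a container like that doesn't have to be much. A sketch of the idea (env var name, ports, and template are all made up, not the actual image's config):

```python
# Turn a comma/space separated BACKENDS env var into an nginx "stream"
# config that round-robins UDP 53 across the listed hosts.
import os

TEMPLATE = """stream {{
    upstream dns_backends {{
{servers}
    }}
    server {{
        listen 53 udp;
        proxy_pass dns_backends;
        proxy_responses 1;  # UDP has no close; expect one reply per query
    }}
}}
"""

def render_config(backends):
    """backends: iterable of 'ip' or 'ip:port' strings."""
    lines = []
    for b in backends:
        host = b if ":" in b else f"{b}:53"  # default to port 53
        lines.append(f"        server {host};")
    return TEMPLATE.format(servers="\n".join(lines))

if __name__ == "__main__":
    hosts = os.environ.get("BACKENDS", "").replace(",", " ").split()
    print(render_config(hosts))
```

Writing that out to /etc/nginx/nginx.conf and exec'ing nginx is the whole trick; the lambda only has to rebuild the env var string and bounce the container.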

Internet Explorer
Jun 1, 2005





Doesn't AWS do gaming stuff with Lumberyard? Kind of odd for them not to support UDP load balancing.

tracecomplete
Feb 26, 2017

Lumberyard is at best an afterthought. They got a redistributable license to CryEngine for cheap; I wouldn't read too much further than that.

Methanar
Sep 26, 2013

by the sex ghost
Someone tell me all of your stories about using GCP's container builder CI tool

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Methanar posted:

Someone tell me all of your stories about using GCP's container builder CI tool

I like it but it’s not super polished.

Pros:

it’s cheap as hell; all our builds are sub-$15/month and we do at least 30 a workday.
Uses the same style that drone or concourse use for building
I don’t have to _ever_ worry about workers
Outputs to GCR, which can do vuln scanning (in alpha) on Alpine, Debian and Ubuntu.
Also GCR images link back to the build that created them
They precache the standard builder images on the workers so it can start super fast

Cons:

Unless you are on public github or bitbucket it’s a pain to trigger builds (we use Jenkins right now)
Any special logic around how to tag or push stuff becomes bash scripts that you run in the build process, making the yaml a bit gnarly
It really wants you to use google source repository behind the scenes, even if you are just mirroring bitbucket or github.
You end up writing a lot of your own code to polish the rough edges.



That said I love it way more than Jenkins.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
it's very rudimentary but also very solid and economical, so I'm sticking with it for now

the gcloud cli support for it is pretty weak, you can't trigger a (remote) build using it, you have to curl the rest api.

however once we got the hang of the container-builder-local plugin for gcloud, we almost never do the remote/cloud build now.
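The curl-the-REST-API part is less painful than it sounds. A rough sketch of the request against the `projects.builds.create` endpoint; project ID, token, bucket, and image names are all placeholders:

```python
# Sketch of triggering a remote Container Builder build over REST, since
# the gcloud CLI (at the time) couldn't. All names here are placeholders.
import json
import urllib.request

def build_request(project, token, body):
    """Build the POST request for projects.builds.create."""
    url = f"https://cloudbuild.googleapis.com/v1/projects/{project}/builds"
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",  # e.g. from a service account
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Minimal build definition: docker-build a source tarball staged in GCS.
body = {
    "source": {"storageSource": {"bucket": "my-src-bucket", "object": "src.tgz"}},
    "steps": [{
        "name": "gcr.io/cloud-builders/docker",
        "args": ["build", "-t", "gcr.io/my-project/app", "."],
    }],
    "images": ["gcr.io/my-project/app"],
}

if __name__ == "__main__":
    req = build_request("my-project", "placeholder-oauth-token", body)
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())
```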

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Internet Explorer posted:

Doesn't AWS do gaming stuff with Lumberyard? Kind of odd for them not to support UDP load balancing.

Star Citizen is the only game of note using Lumberyard, and it's not really notable for the right reasons.

OWLS!
Sep 17, 2009

by LITERALLY AN ADMIN

Hadlock posted:

I think we work at the same company as that's the same general solution we came up with yesterday

Found a nice docker container that just does UDP 53 load balancing using nginx; pass in the load-balanced hosts as a string in an env var, running it on two hosts. A lambda scrapes the autoscaling group IPs and then relaunches the container on each docker host with the new env var string.

It's a loving mess.

Also super gently caress aws for not doing UDP load balancing

Heh, not quite. Our solution was to stick the target servers behind internal DNS records, so the nginx balancers always have a pool of servers to choose from, and they only need to be relaunched when the server pool changes in size, not when the hosts themselves are rebuilt, since the internal DNS record doesn't change. (That poo poo isn't quite autoscaled because LEGACY so..)

Hadlock
Nov 9, 2004

Hadlock posted:

Also super gently caress aws for not doing UDP load balancing

I spent like 2 hours today reading through all the kubernetes UDP-related doco; as far as I can tell, SREs are allergic to UDP

Someone on Stack Exchange noted back in 2016 that docker's official documentation doesn't even document how to expose UDP ports when using the docker run command. That comment is still true to this day.

I found a super cool docker image that lets you run a UDP load balancer on a container with very low config:

https://hub.docker.com/r/instantlinux/udp-nginx-proxy/

Going to move it to container linux and have it boot with a launch config in an auto-scaling group of 1. Lambda will update the cloud.config file with the new backends and then nuke the node(s) in the autoscaling group... haven't figured out how to attach our singular ENI to a singular autoscaled node yet.

So far no issues. Waiting on my coworker to get done building our bind container and will put things through the wringer the next couple of weeks; this crazy DNS system will be the linchpin for our database DR system... should be interesting.

Hadlock fucked around with this message at 10:12 on May 30, 2018

Hughlander
May 11, 2005

Hadlock posted:

I spent like 2 hours today reading through all the kubernetes UDP-related doco; as far as I can tell, SREs are allergic to UDP

Someone on Stack Exchange noted back in 2016 that docker's official documentation doesn't even document how to expose UDP ports when using the docker run command. That comment is still true to this day.

I found a super cool docker image that lets you run a UDP load balancer on a container with very low config:

https://hub.docker.com/r/instantlinux/udp-nginx-proxy/

Going to move it to container linux and have it boot with a launch config in an auto-scaling group of 1. Lambda will update the cloud.config file with the new backends and then nuke the node(s) in the autoscaling group... haven't figured out how to attach our singular ENI to a singular autoscaled node yet.

So far no issues. Waiting on my coworker to get done building our bind container and will put things through the wringer the next couple of weeks; this crazy DNS system will be the linchpin for our database DR system... should be interesting.

Not sure if it is still true but when I last looked it wasn’t possible to run udp openvpn in docker just tcp which had other issues. But that was a long time ago.

Hadlock
Nov 9, 2004

It definitely works, you need to EXPOSE 53/udp in the docker file and -p 53:53/udp in the docker daemon

Got it working yesterday afternoon, works like a charm

Hadlock
Nov 9, 2004

Also I've been running openvpn in a container on a personal server for 18+ months with no issues, so

Extremely Penetrated
Aug 8, 2004
Hail Spwwttag.
Can I please get some advice from those who have done this before? My org wants to get started with being able to host Windows containers, as well as support some CI for a handful of devs each doing their own thing. Some are on TFS and others on an old version of GitLab, but nobody here has any CI/CD experience. We're 100% on-prem, no butt stuff. There's a need to keep things as simple as possible so that I'm not creating a nightmare for the rest of the Ops team.

My current plan is to do a couple Docker Swarms with Traefik for ingress, and then move all the devs to an upgraded version of GitLab for image repositories and CI jobs. I'd like to make them a sample pipeline to use as a reference, and then make them responsible for their own crap. I'm not sure yet if I should do a build environment or have them build on their workstations and upload to the repository. Does this approach make sense?

I don't have a clear idea of our dev's typical workflow, but they mostly make little .NET webapps with databases on an existing SQL cluster. They manually update UAT/prod by copying files over. Is there anything in my proposed plan that would be a no-go for normal dev work? What should I be asking them or looking for?

Erwin
Feb 17, 2006

Extremely Penetrated posted:

We're 100% on-prem, no butt stuff.
heh

Extremely Penetrated posted:

I'd like to make them a sample pipeline to use as a reference, and then make them responsible for their own crap. I'm not sure yet if I should do a build environment or have them build on their workstations and upload to the repository. Does this approach make sense?
Container images that make it to production should be built within your CI/CD pipeline, not on developer workstations. The developers may need to build images locally for development work, but the workflow should be: local development -> push code change only -> some sort of automated testing and whatever merge process -> deployment.

Extremely Penetrated posted:

I don't have a clear idea of our dev's typical workflow...what should I be asking them or looking for?
Just sit down with them and watch them deploy a change. Start by automating the minimum amount necessary to keep them from having to manually copy files around. That could just mean the developer checks in code and then goes and clicks a button to copy the files instead of doing it themselves. That's not great, but it's a step in the right direction, and it's easier to get buy-in with that than to burn everything down and start over. Work in little steps towards a proper deployment pipeline. Every change you make should make the developer's life easier in some way. Find the low-hanging fruit first and you'll get buy-in for the more involved stuff down the road.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Extremely Penetrated posted:

Can I please get some advice from those who have done this before? My org wants to get started with being able to host Windows containers, as well as support some CI for a handful of devs each doing their own thing. Some are on TFS and others on an old version of GitLab, but nobody here has any CI/CD experience. We're 100% on-prem, no butt stuff. There's a need to keep things as simple as possible so that I'm not creating a nightmare for the rest of the Ops team.

My current plan is to do a couple Docker Swarms with Traefik for ingress, and then move all the devs to an upgraded version of GitLab for image repositories and CI jobs. I'd like to make them a sample pipeline to use as a reference, and then make them responsible for their own crap. I'm not sure yet if I should do a build environment or have them build on their workstations and upload to the repository. Does this approach make sense?

I don't have a clear idea of our dev's typical workflow, but they mostly make little .NET webapps with databases on an existing SQL cluster. They manually update UAT/prod by copying files over. Is there anything in my proposed plan that would be a no-go for normal dev work? What should I be asking them or looking for?

Erwin is 100% correct.

One of the major things I do professionally every single day is help teams implement continuous delivery pipelines. If you're not already on Docker, you are putting the cart before the horse in a big way. There's such a thing as "concept overload". You need to make incremental improvements to the existing process over a period of time, otherwise everyone will be unable to maintain, use, or troubleshoot the solution you deliver... except you. The less buy-in you have from the rest of the team and the more foreign it is, the more likely you are to be met with hostility and nay-saying. And in that case, god help you if your solution has a problem.

Also be aware that Windows containers are garbage; I have yet to be able to successfully containerize anything other than trivial, contrived "Hello World" applications with Windows containers.

[edit] Also be aware that unless it's a major priority to implement good test automation practices as part of all of this, the net result of your effort is going to be accelerating the rate at which the team can push bugs into production. I'm making an assumption about their level of testing maturity (nonexistent to low) based on their deployment practices (stone-age), which could be wrong.

But basically doing this right is a long-term project that's a big team effort and requires significant changes to how people do their jobs day-to-day. The devops mantra is "people, process, tools", not "tools, tools, tools".

New Yorp New Yorp fucked around with this message at 18:49 on Jun 7, 2018

crazysim
May 23, 2004
I AM SOOOOO GAY

Extremely Penetrated posted:

We're 100% on-prem, no butt stuff.

Last thing I would expect from "Extremely Penetrated".

Helianthus Annuus
Feb 21, 2006

can i touch your hand
Grimey Drawer
Huh, I didn't know windows containers were a thing. If you can't get the containers to do what you need, I would just use Windows VMs. We had good luck running a bunch of Windows VMs in xen, but this was just for selenium stuff.

I've never done this kind of work for a windows shop, so I'm not sure what you need in your environment to run builds and tests. How much hardware can you throw at this?

If you can automate the build / copy-to-staging process, I think your devs will be very happy. Implementing push-button deploys to staging will get you accolades if your devs are used to copying files around like cave men. Automatic deploys to UAT when devs push commits could be a big win for you too.

Scikar
Nov 20, 2005

5? Seriously?

Windows containers are totally a thing and it's where Nano server ended up (it's now a container-only OS). They just don't really solve anything. The MS container images for things like the Cosmos DB emulator are built off Server Core by default, so they clock in at 1GB for the OS layer. If you are using .NET Framework that's your only option. If you want to use Nano server (and of course you do because 1GB for your OS layer is nuts) then you have to use .NET Core. But .NET Core is cross platform anyway, so at that point you can just use Linux containers, and why wouldn't you because otherwise you're cutting yourself off from 99% of the Docker ecosystem for zero benefit (or else having to run a separate set of Docker hosts).

I haven't really thought about it on a larger scale but I suspect it's less effort to set up a pipeline for .NET Core apps running on Linux containers and then gradually port projects from Framework to Core when you update them, than it is to take your existing full fat Framework apps and get them to play nicely in Server Core containers.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Scikar posted:

Windows containers are totally a thing and it's where Nano server ended up (it's now a container-only OS). They just don't really solve anything. The MS container images for things like the Cosmos DB emulator are built off Server Core by default, so they clock in at 1GB for the OS layer. If you are using .NET Framework that's your only option. If you want to use Nano server (and of course you do because 1GB for your OS layer is nuts) then you have to use .NET Core. But .NET Core is cross platform anyway, so at that point you can just use Linux containers, and why wouldn't you because otherwise you're cutting yourself off from 99% of the Docker ecosystem for zero benefit (or else having to run a separate set of Docker hosts).

I haven't really thought about it on a larger scale but I suspect it's less effort to set up a pipeline for .NET Core apps running on Linux containers and then gradually port projects from Framework to Core when you update them, than it is to take your existing full fat Framework apps and get them to play nicely in Server Core containers.

Yeah, here's an example: you can't install an MSI package in Nano. You can in Server Core, but that doesn't mean that every MSI (or even 'most' or 'many' of them) will install successfully.

If you have a CRUD web app that has literally no OS dependencies other than IIS and the .NET framework (and any associated assemblies that your application builds or deploys), there's a pretty good chance it will work in a Windows container. Anything else? Haha, good luck.

Extremely Penetrated
Aug 8, 2004
Hail Spwwttag.
Thanks for the good thoughts, I appreciate it. I'll sit down with each project and get an idea of how they do things, but my expectations are as low as yours. I agree that starting with a whole build pipeline will be too much.

I have to worry about buy-in from both sides here, and the easier sell for Ops is containerizing all these tiny crap apps to cut down on VM sprawl and especially Windows Server licensing costs. Storage-wise, 6.3GB per node for microsoft/windowsservercore is nothing when they're already using dedicated Dev, UAT and Prod VMs (plus backups).

My prototypes so far can definitely confirm that docker on Windows is crap; there's no end of gotchas. Overlay network egress didn't work at all in a mixed Linux/Windows swarm. And docker, why did you lower the MTU of the host's external adapter to 1450 but leave the bridged adapter at 1500, when you know MSS announcements don't work between them? Hello, semi-randomly reset connections. And your MTU config option does sweet fuckall on Windows. gently caress.

But I think for simple stuff it's pretty feasible. I containerized two ASP.NET 3.5 sites without much hassle, and I've never done this before. I don't know that I'd want to try it with any of the legacy off-the-shelf apps we're hosting though.

Scikar
Nov 20, 2005

5? Seriously?

It sounds like you're on a very different scale to me so take this with a grain of salt, but in my case I have the whole team sold on moving stuff to .Net Core and Docker in the long term, with Octopus Deploy to bridge the gap in the interim. I think it's asking a lot to take a dev from manually copying files to containers in one go, and Docker still has a way to go in ironing out these weird issues that keep cropping up. You don't want to deal with those while you're still trying to sell the team on your architecture design. Octopus is just doing what they already do but with more speed, accountability and reliability so it's much easier to sell. Once people are comfortable with packages at the end of their build pipeline it's much easier to go from that to containers in my opinion.

clutchpuck
Apr 30, 2004
ro-tard
At docker con. Any CI goons here?

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Scikar posted:

It sounds like you're on a very different scale to me so take this with a grain of salt, but in my case I have the whole team sold on moving stuff to .Net Core and Docker in the long term, with Octopus Deploy to bridge the gap in the interim. I think it's asking a lot to take a dev from manually copying files to containers in one go, and Docker still has a way to go in ironing out these weird issues that keep cropping up. You don't want to deal with those while you're still trying to sell the team on your architecture design. Octopus is just doing what they already do but with more speed, accountability and reliability so it's much easier to sell. Once people are comfortable with packages at the end of their build pipeline it's much easier to go from that to containers in my opinion.

Agreed; we have been using Octopus for years now, and we're about to shift everything to containers. We're a mostly Windows/MSFT place so there's some interesting challenges, such as the web apps that use domain authentication on the browser, or transforming web.config files on the fly based on which cluster environment they are in.


FlapYoJacks
Feb 12, 2009
I set up my first Jenkins server today with AWS AMI slaves and it’s pretty slick. Took me 8 hours because I only had passing knowledge of AWS and 0 experience with Jenkins. So far I am very happy with it.
