New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Space Whale posted:

So now build, msbuild, and nuget. But in what order?

Is there a way to trigger a build from the web interface or do I have to push a comment commit to git and wait a minute every time? :effort:

NuGet package restore can be configured to run automatically as part of the build process (it modifies the *.*proj files to include a reference to an MSBuild target). If you're talking about publishing binaries into nuget, that's a different story.

Also, you should need minimal-to-zero MSBuild fuckery to build. Proj files are msbuild files that already know how to build everything. The goal of your build system should be to get you deployable binaries. Once you have those, you're done. If you try to write release scripts in MSBuild, you will be miserable and hate life. Trust me on this.
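
To answer the ordering question directly: if you're kicking it off by hand, it's just restore-then-build. A bare-bones sketch, assuming nuget.exe and msbuild are on your PATH (the solution name is made up):
code:
nuget restore .\MySolution.sln                      # pull packages down first
msbuild .\MySolution.sln /p:Configuration=Release   # the proj/sln files already know how to build everything
# The bin output is your deployable artifact; what happens to it next is a release concern, not a build concern.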

New Yorp New Yorp fucked around with this message at 20:29 on Jan 30, 2015

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Is it just me, or is Chef loving awful? I decided to exit my Windows comfort zone a bit and play with Chef this weekend, and it has not been a happy, smooth experience. Doing everything requires endless dicking around with config files and googling, and no one seems to have any clear instructions on how to set up and use the thing properly.

I got it to the point where I can invoke a really simple cookbook on a target server (literally babby's first cookbook), but it's just been an arduous, painful process. I'm dreading figuring out how to manage cookbook dependencies and write some sort of useful recipe.
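
For reference, "invoke a really simple cookbook on a target server" eventually boiled down to something like this (cookbook/node names made up, and I'm going from memory on the exact knife syntax, so double-check it):
code:
knife bootstrap 10.0.0.50 -N web01 -x deploy --sudo        # register the target server as a node
knife cookbook upload starter_cookbook                     # push the cookbook to the Chef server
knife node run_list add web01 'recipe[starter_cookbook]'   # add the recipe to the node's run list
knife ssh 'name:web01' 'sudo chef-client' -x deploy        # converge the node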

Also, the way they named things is super irritating. KNIFE RECIPE COOKBOOK KITCHEN. I'm surprised they call servers "nodes" instead of "POTS AND PANS LOL GET IT COOKING IDIOMS"

Puppet is up next on my list of things to play with... am I going to be just as miserable setting it up and using it, or is it better?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

beuges posted:

What I want is for all the nightly builds/deploys to update the dev environment automatically. Then, once we're ready to move something into test, just change the deployment URL's and credentials and initiate an on-demand build, and have everything update the test environment as well.

This is not a good practice, IMO. You should be building a set of binaries, testing them against a dev/integration environment, then promoting those binaries to the next environment in the release path. There are tools out there to help you manage releases like this. Overextending the build process to deploy software is really painful and inflexible.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

bgreman posted:

We're a C#/ASP.NET/SQL shop that builds both an enterprise-level website and a related desktop application. When I first got here, it was all CruiseControl.NET driving MSBuild from TFS. At some point, my bosses were like, "Hey let's go full Microsoft!" and so I ported the build system to use TeamBuild 2010 in ways that Microsoft probably wasn't intending for that release. Needless to say, I've become quite practiced at customizing these drat .xaml workflow templates and doing eldritch things with build definitions.

We've also gone through about four different methods for managing our configurations, including

  • Hardcoding it into the build workflows (driven by environment-specific values from external .xml files), meaning environment-specific configuration happened at build time and the build only produced output for the environments we'd anticipated up front -- BAD!
  • Writing our own tool to perform XPath-replace-based configuration post-build. This meant we could configure for an environment we didn't anticipate when we ran the build. -- Slightly less BAD!
  • Just checking all the config files for all the environments right into TFS. -- Less bad, but maintenance heavy.
  • Web/app.config transforms -- our next step. Only small portions of the application have been modified to use this technique, but it seems pretty flexible and cuts down on config maintenance.

Meanwhile, another group has spun off here that is going full Java/node.js/Mongo/Rabbit, etc. They're using TeamCity and Gitlab and having just an awful time of it.

I kind of miss doing actual dev, but being "the build guy" is kind of a nice thing to have on my resume. Just wish TeamBuild was in more demand.

Team Build is getting a full rewrite in 2015 that will be much less awful. Check Chris Patterson's blog on MSDN.

You shouldn't have your build responsible for deployment in general. Config transforms encourage bad practices like one build per environment instead of a binary promotion model.
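
If you want a picture of what deploy-time configuration looks like instead: one set of binaries, plus an environment-specific step that fills in tokens when you deploy. A minimal sketch (token format, paths, and values are all invented, this isn't any particular tool):
code:
param([string]$Environment = 'QA')

# Environment-specific values live outside the binaries; the same drop gets promoted everywhere.
$settings = @{
    '__DB_CONNECTION__' = "Server=sql-$Environment;Database=MyApp"
    '__API_URL__'       = "https://api-$Environment.example.com"
}

# Replace tokens in the config that shipped with the build output.
$config = Get-Content .\drop\Web.config -Raw
foreach ($token in $settings.Keys) {
    $config = $config.Replace($token, $settings[$token])
}
Set-Content -Path .\deploy\Web.config -Value $config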

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
It's worth taking a look at Visual Studio Online if you're already in the Microsoft world. That will cover your source control and build, and they'll be adding a redesigned release/deployment experience later this year.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

syphon posted:

One of the biggest challenges I see when moving teams into a CI/CD model is the concept of "Every check-in must be capable of shipping all the way to release". If they've been doing it for a long time, people get way too used to the concept of "I can always check in a fix later" and break the build or commit their half-written code. The more devs you have working in this mindset, the longer your build/deploy/tests will be broken, people will be blocked, and you're not releasing software. The idea is to set up your branch plan so that it allows people to commit frequently, but ALSO not commit junk to the Trunk branch and break everyone else (GitHub is really good at this by default).

The other big challenge is to get people to start using feature flags and short-lived dev branches so you can ship your code even if a feature is half-completed. The killer is usually database stuff -- it's hard to get into the mindset of never (rarely) introducing breaking database schema changes.
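
The flag mechanism itself doesn't need to be fancy. A toy sketch of the idea (flag names invented; in real life it lives in the app's own code/config, not a script):
code:
# Half-finished work ships dark behind a flag that defaults to off.
$featureFlags = @{
    NewCheckoutFlow = $false
    FasterSearch    = $true
}

if ($featureFlags.NewCheckoutFlow) {
    Write-Output 'Using the new checkout flow'
}
else {
    Write-Output 'Using the old checkout flow'
}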

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

syphon posted:

Feature flags are great, but as the team/product gets larger they can turn into their own nightmare. For example, if you have 20 different devs contributing to the same product and they all put their code behind feature flags, that's 2^20 permutations of the app that should be tested (each feature should be tested against every possible permutation of every OTHER feature flag). What if another team has to roll back their changes or turn their feature off? Are you sure your code works reliably with features A, B, and C turned on but features X, Y, and Z turned off?

I rarely see a team of 20 devs all working on totally isolated features. It's more commonly 20 devs working on 1 or 2 Big New Features (that will reasonably span multiple sprints and thus be good candidates for feature flagging), and 3-5 little features/bugfixes that aren't big enough to warrant a flag.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

bgreman posted:

Do you work at my office? We've been revamping our CI/CD scheme over the last six months, migrating from many long-lived "feature" branches to one trunk branch + feature flags, and the pushback has been incredible, particularly with database stuff.

I work in many offices, helping people figure out how to do this stuff better. :)

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Mandator posted:

However I recently just set up Microsoft Release Manager for a TFS/GIT SC setup for a fairly large company if anyone has any questions about that. I think it's a pretty neat setup and I can't poke any holes in it. I'd love for you guys to poke holes in it though.

I've been working with the Release Management stuff a ton for the past 18 months. If you're using the agent-based model, stop right now and start considering how you can transition it to PowerShell or DSC scripts ("vNext" releases). The agent/fine-grained workflow model is being totally abandoned in TFS 2015 Update 1 in favor of a new release system that's closely modeled after the new build system (in fact, it's the exact same task system -- a build action can be a release action and vice versa). The idea is that you'll have your deployment scripts be in DSC/PowerShell/Chef/Puppet/Octopus/whatever and use the release tooling in TFS to orchestrate and manage releases, but not deployment. The release tooling will not help you deploy your software at all, there will be no built-in tasks for "set up a website" or anything like that. If you want to set up a website, write a DSC script, source control it, and invoke it as a release task.
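
For the "set up a website" example, the thing you'd source control and invoke from a release task is just a plain DSC configuration. A minimal sketch, assuming the xWebAdministration resource module is installed (site name, node name, and path are invented):
code:
Configuration DemoSite
{
    Import-DscResource -ModuleName xWebAdministration

    Node 'webserver01'
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        xWebsite Site
        {
            Ensure       = 'Present'
            Name         = 'DemoSite'
            PhysicalPath = 'C:\inetpub\DemoSite'
            State        = 'Started'
            DependsOn    = '[WindowsFeature]IIS'
        }
    }
}

# Compile to a .mof and push it to the node.
DemoSite -OutputPath .\DemoSite
Start-DscConfiguration -Path .\DemoSite -Wait -Verbose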

The ALM Rangers are kicking off a project to create migration guidance and tooling next week, but it's going to be a shitshow for the existing users. I'm donating a bunch of code to the project because I foresaw this problem a while back and wrote a bunch of proof of concept code for doing migrations knowing we'd need it someday.

New Yorp New Yorp fucked around with this message at 01:02 on Jul 16, 2015

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Mandator posted:

I could have sworn that agent-based releases were not being phased out in 2015 and there was going to be a 2015 RM client that still supported agent-based releases. I even read this from somewhere I trusted when I was doing my research on DSC/agents. Gosh loving dang it. Why are they removing a feature that works perfectly fine?

However we only have around five of our enterprise projects CI'd at the moment using agent based releases so the switch should be relatively trivial. I've already extended the default functionality with PowerShell scripts so I'm not too worried about just going back to writing my own scripts for deployment.

Still, drat, thanks for the heads up man.

They're not being phased out, they're just going into maintenance mode. Think Silverlight or LINQ to SQL. They still exist, they work, you can use them, but they're not getting updates and there are newer technologies that you're supposed to use instead.

There's going to be a 2015 client/server/agent, with the minor aforementioned improvements. The new release system is entering private preview right now in VSO (I have access right now but haven't had much time to play with it and can't really comment on it beyond what's already public information). Once it drops for real (this fall/winter, last I heard was TFS 2015 Update 1 timeframe for on-prem, earlier for VSO), I would expect the client/server to work for another two or three years before they officially deprecate it for TFS2018 or whatever. That's my guess, that's nothing official from Microsoft or anything.

The reason is Microsoft's new cross-platform/cross-technology direction. They bought the existing software from another company to get something out there immediately and make it well known throughout the industry that they intended to enter that space. They then transferred the acquired team to other projects and started working on their own release implementation that hewed more closely to their vision. You'll note that the "vNext" DSC stuff entered the picture pretty rapidly -- that was the direction they wanted to go in all along. The acquired technology was built on and for Windows and .NET, 100%. The client uses Windows Workflow, Windows executables, and PowerShell scripts, which really doesn't translate to another platform, especially not with the shift toward everything being web-based.

The granular component/tool system worked okay for simple scenarios, but it didn't scale to very complex applications, and some aspects were broken at such a fundamental level as to render them useless: rollbacks are implemented in a backwards way, the security model is awful, and a lot of the built-in tools are not idempotent and fail in weird ways. I did some implementations at Fortune 500 companies and really big insurance/financial institutions, and the problems become very pronounced at that scale.

You can still achieve everything available in the agent task model with PowerShell/DSC scripts; it just requires more up-front effort. The ALM Rangers DSC resource kit helps fill some gaps, although not all of them. If the DSC ecosystem becomes more robust and discoverable, life will get better. I really didn't like working with Chef, but I will admit that Chef has an awesome community where cookbooks for every conceivable common scenario are already available. DSC needs to get to the same point.
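
The other habit that pays off when you roll your own scripts: make every step idempotent, so running a deployment twice is harmless. A trivial sketch (service name and path invented):
code:
$installDir = 'C:\apps\MyService'
if (-not (Test-Path $installDir)) {
    New-Item -ItemType Directory -Path $installDir | Out-Null
}

# Only create the service if it doesn't already exist; only start it if it isn't running.
$svc = Get-Service -Name 'MyService' -ErrorAction SilentlyContinue
if ($null -eq $svc) {
    $svc = New-Service -Name 'MyService' -BinaryPathName "$installDir\MyService.exe"
}
if ($svc.Status -ne 'Running') {
    Start-Service -Name 'MyService'
}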

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Microsoft's Release Management 2015 came out today, and I know some folks in this thread are using the 2013 edition.

I just upgraded my company's internal sandbox instance and it is totally hosed and nonfunctional. I think they're so focused on their total rewrite that they didn't put a lot of energy into testing this one, and it shows big-time. Obviously I have a sample size of 1 and your upgrade might work flawlessly, but I wanted to put the warning out. Of course, if you upgrade it and don't do a database backup first, you're dumb. So do that.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

StabbinHobo posted:

Has anyone built a CI/CD pipeline for a Unity 3d app? I've built LAMP and JVM pipelines but never a win/C# one. I just need a good blog post that walks through the options at the different steps.

What type of application is this? I assume from hearing "Unity" that it's a desktop application. Continuous integration is easy: You build it. You run code analysis. You run unit tests. However, you can't really do "continuous" delivery of desktop applications, except maybe to QA lab environments for running a suite of system/UI tests. When you're dealing with desktop applications, the best you can do is publish an installer or something like a ClickOnce (ugh) package.
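
The build step itself is just running the Unity editor headless from your build definition. Roughly this, though the exact switches vary by Unity version and the build method name here is invented, so treat it as a sketch:
code:
& 'C:\Program Files\Unity\Editor\Unity.exe' `
    -batchmode -quit `
    -projectPath 'C:\agent\_work\1\s\MyUnityGame' `
    -executeMethod 'BuildScripts.PerformBuild' `
    -logFile 'C:\agent\_work\1\a\unity-build.log'

if ($LASTEXITCODE -ne 0) { throw "Unity build failed -- check the log." }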

In any case, this is all pretty easy stuff in the Microsoft world these days... they've been putting a lot of effort into making it discoverable and comprehensible over the past few years.

What are you currently using for source control? What branching strategy are you using?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

wins32767 posted:

What a wonderful, productive comment. I'm certainly illuminated by it.

DevOps is a culture shift toward fostering communication between developers and operations. That's it.

For some reason, people call developer roles with a focus on automation of operations tasks "devops".

I gave up fighting against the name a while ago.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

wins32767 posted:

Thanks, that's helpful. Our CTO and I are doing the operations now, so there is no meaningful daylight between operations and development, nor do we want there to be. What we do want is someone who has an interest in automating and managing our infrastructure, as well as keeping an eye on our design and architectural choices to make sure we're making decisions that result in infrastructure that isn't a nightmare to support in a couple of years. It doesn't feel like an OG sysadmin position, since we want them involved in design and architecture discussions as well as automating the hell out of our infrastructure. If that's not DevOps, what would you call it?

It's a developer position. All the developers should be contributing. You want to hire another developer, and some development tasks are infrastructure automation tasks.

A devops role is just saying "let's totally automate the incredibly critical stuff and make sure the knowledge of that automation is understood and maintained by as few people as possible"

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

wins32767 posted:

I'm struggling with the "all developers should be contributing" piece. I'm uncomfortable with junior developers being involved in operations on a production environment. It's asking a lot of them to master a language and framework, much less all of the other pieces of infrastructure that support a production environment. Dealing with production systems also requires a degree of judgement that takes a while (and dealing with multiple self inflicted problems) to develop. I can see the value of involving them in writing infrastructure automation. Being able to actually execute changes in production, especially in a highly regulated domain, feels different to me. Maybe I'm too old school.

They're not operating on production environments. They're writing automation scripts that will be well-tested and well-understood in lower environments before they ever touch production. You should, at the very least, have a staging environment that is 100% the same as production.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

wins32767 posted:

Right, and the script writing is the part I'm fine with. But we need someone who is capable of getting on production environments that are having issues, figuring out what's wrong, and fixing the problem. That's the managing infrastructure piece I referred to.

That's where an operations team comes in that actually has access to those machines. That's where the devops culture is in play. Instead of spending 3 days filling out request tickets and dealing with administrative red tape, the developer says "hey mr. ops guy, I need to get onto this box for a few minutes" and they work together to solve the problem. Except that should rarely happen, because nothing should be changing in your infrastructure that isn't source controlled, reviewed, and being promoted through several lower stages first for testing.

wins32767 posted:

The other thing about a systems/infrastructure specific job that's helpful from a management perspective is that it's not someone that the business folks are going to push to be working on features rather than infrastructure. Another developer could easily be pulled into feature work that has revenue figures attached "just for this sprint".

It's up to the business to prioritize what they want delivered. If they want you to work on things that generate revenue and are willing to have you not working on infrastructure tasks to get it, you either shrug and do what they want or convince them otherwise.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Vulture Culture posted:

TeamCity is a great build server but nothing is very good at handling the deploy end of CD without a ton of duct tape and glue. That said, I've almost never run into weird operational problems with it, unlike Jenkins.

That's why the deployment piece is being foisted off onto configuration management systems for the most part. Overextending a build system to do deployments sucks. Plus builds pushing bits encourages building per environment instead of promoting changes from one environment to the next.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Boz0r posted:

After further examination of the problem, we actually just want a build server that can run tests on the build before the code can be sent to the version control server. And just have a button on the build server that spits out the newest version, that we'll manually deploy.

So you're looking for (in centralized VC terms) a gated build, or in DVCS terms a pull request + build policy?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Boz0r posted:

I don't know what it's called, I just want to be able to click a button in IntelliJ that sends the code to TeamCity or whatever. That builds it, runs all the tests and verifies policies and, if it's supergreen, passes the code along to SVN.

EDIT: I think JetBrains calls it a delayed commit.

Yeah, TFS/VS Team Services has something called a gated checkin that accomplishes that for their CVCS. Their new build system can build from Subversion, but they haven't implemented gated checkin yet.

For Git, they have pull requests + branch policies, where it will reject the pull request automatically if a build fails.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Illegal Move posted:

I've been using self-hosted Bamboo (+ JIRA) for a year for my personal projects. The starter license is not that expensive, but the 10 build jobs limit has been very annoying. I was wondering, before I renew my license, is there something better that I could use in the same price range or for free?

My situation is basically just around 5-10 active projects with private repos (most of them only have one or two committers) - most of them only get built and deployed <10 times per month. I would love any alternatives to be self-hosted as well, and integration with an issue tracker (doesn't have to be JIRA) would be amazing. Is anybody in a similar boat? What are you guys using?

Team Foundation Server Express. On-premise installation, free for up to 5 people. Full work tracking / source control / build.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Illegal Move posted:

Thanks, that sounds great, but from what I can tell, it only runs on windows? Sadly, I don't have any windows boxes.
From what I know, ssh from windows is a huge pain, so I'm assuming that even if I set up a virtual machine for this, actual deployments to my (linux) servers would not be easy to set up?

Correct, the actual server piece runs on Windows. The build and release infrastructure is cross-platform, however -- the build/release agent is node.js.

It's worth looking at Team Services. Same restrictions (free for up to 5 people) but you don't have to worry about the infrastructure side. Build is still free assuming you set up an on-prem agent.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

poemdexter posted:


The goal is to allow QA to test multiple things independently if needed so that developers aren't waiting to push code because the current QA environment is being used. We currently just have DEV/QA/PROD environments and builds get pushed around to the environments as needed, but we're trying to migrate to docker since the infrastructure team has drunk the koolaid. I'm just a developer, but I do devops a lot for our team since we're sorta in control of our own destiny in terms of build/deploy and I'm the only one with any sort of experience.

It sounds like you're not continuously integrating. You shouldn't need multiple environments to QA multiple features.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

poemdexter posted:

What do you mean? Basically when new code gets checked in onto the release branch, a build kicks off and the artifact gets deployed onto the QA environment overriding the previous version. What I'm trying to do is handle the case where we need 2 different versions running at the same time that can be accessed separately.

It comes back to the question of why you want two different versions to begin with. If work is being continuously integrated, you (theoretically) only need to be testing a single version -- the version you're trying to get pushed out the door.

This is assuming a web app, of course. Standalone applications are a different ballgame.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Vanadium posted:

We have a devops team ... we're probably doing this wrong.

Having a devops team is doing it wrong by definition.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

EssOEss posted:

In VSTS/TFS, is there some way to tell it "Don't run my build definition multiple times in parallel"? I have some external dependencies that will break if I run a build definition twice concurrently. However, I cannot find any way to actually limit this.

The subject is very difficult to Google, as well, since everyone seems to *want* more parallelism, so every answer is about how to make it happen.

Assuming private/on-prem agents, not hosted: You can set a custom Demand on the build definition and choose one agent to assign a matching Capability, but then you're limited to it only ever running on that one agent, even if that agent is being used by other builds and there are idle agents.

Fundamentally though, your build process is totally broken if multiple builds can't be run in parallel. I'd focus on fixing that problem. What is it doing with "external dependencies" that causes a failure? What are those external dependencies?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

EssOEss posted:

Yeah, I do not want to limit it to one agent.

For the sake of simplicity, you can imagine my build process uploading http://example.com/latestversion.exe. If two happen in parallel, the last one finishing wins and there's no way to know that the one that actually wrote it there was from the most recent checkin.

Serializing the builds would be the easiest way to eliminate such issues.

Use a Publish Artifacts step. Builds already natively have the ability to deal with publishing outputs so they can be available downstream.
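
If you'd rather do it from a script step than the stock Publish Build Artifacts task, the agent also understands a logging command for the same thing (syntax from memory, so verify it against the docs before relying on it):
code:
# Copy the build output into the staging directory earlier in the build, then:
$exe = Join-Path $env:BUILD_STAGINGDIRECTORY 'latestversion.exe'
Write-Host "##vso[artifact.upload containerfolder=drop;artifactname=drop]$exe"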

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Me, 3 months ago: If you share unversioned deployment scripts across dozens of similar applications, eventually someone will make a breaking change and you'll get abrupt deployment failures.
Client: That will never happen

Them, 3 minutes ago: Someone made a breaking change to our deployment scripts and now releases are failing left and right!

Sometimes I hate being right.

Tomorrow: Implement versioning of their deployment scripts.
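
The fix itself is boring: publish the shared scripts as versioned folders (or a package feed) and have every application pin the version it was actually tested with, instead of everyone pulling "latest". A sketch with invented names:
code:
# deploy-scripts.version is a one-line file checked in alongside the application, e.g. "1.4.2"
$pinnedVersion = (Get-Content .\deploy-scripts.version).Trim()
$scriptRoot    = "\\buildshare\deploy-scripts\$pinnedVersion"

# Breaking changes to the shared scripts now ship as a new version folder;
# nothing changes for an app until it deliberately bumps its pinned version.
& "$scriptRoot\Deploy-WebApp.ps1" -Environment 'QA' -PackagePath '.\drop\MyApp.zip'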

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Newf posted:

Hi all. I assume this is the place for Docker questions. I've never used it before, so could people confirm / deny my impression of how it works / what it does? I'm skimming docs as I write this...

Given the following docker compose file on a Windows 10 machine:

What happens when I run 'docker-compose'?

Does it check the local machine for installs of postgres / redis / whatever? Does it pull down docker-friendly versions of these programs from the web (eg, http://hub.docker.com/_/postgres/ or http://store.docker.com/images/postgres/), and then cache these images locally for future installs?

Is there a way other than 'docker pull X' to make images locally available? My dev machine is a desktop with limited bandwidth, so it'd be handy to be able to download an image with a laptop at the library and then install it at home.

I see that the ./directories are pointing to directories in the repo, but what about the ~/.m2 and ~/.lein directories? Some linuxy thing I shouldn't worry about?

When you run docker-compose, nothing happens by default. What it sounds like you want to do is run docker-compose build.

Each of those "services" is pointing to a different docker image or dockerfile. A docker image is a prebuilt, ready-to-run package that someone built and published to a docker registry for others to use. In this case, the docker compose file has one image referenced: redis. The rest of the services are pointing to dockerfiles. Dockerfiles start with another image as a baseline, then layer on some commands or files or whatever to customize it.

If you were to run docker-compose pull, it would only download redis. The rest of the services have to be built from their dockerfiles first, the first step of which is downloading the necessary images.

Once an image is downloaded, it's cached for reuse. So basically do a quick build and you should be good to go.
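
In command form, the usual loop looks like this (exactly which services get pulled vs. built depends on your compose file, which I can't see, so this is generic):
code:
docker-compose pull       # only fetches services that reference a published image (redis, here)
docker-compose build      # builds the Dockerfile-based services, pulling their base images as needed
docker-compose up -d      # starts the whole stack from the cached/built images

# For the limited-bandwidth question: images can also be moved around as plain files.
docker save redis -o redis.tar    # on the machine that has bandwidth
docker load -i redis.tar          # on the machine that doesn't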

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

beuges posted:

So is Docker going to run Windows Server in a VM off Windows 10, and then run my containers in the VM?

No. Windows containers use Hyper-V as a hosting mechanism. Hyper-V actually treats even the host OS as a VM, albeit a very special VM.

beuges posted:


On my target machine, if the Windows Server version matches the docker base image OS version, will it run the containers directly off the underlying OS via the docker engine, or will it still create a Windows Server VM on my Windows Server machine regardless?

Stop thinking in terms of VMs. Containers aren't VMs, they are isolation layers. Windows containers run on the Hyper-V hypervisor to get access to system resources (CPU, memory, disk, etc). The "base image" is more of a set of basic capabilities than it is a full OS. This is why containers start in a few seconds instead of a minute or two -- starting a container doesn't involve booting up a full kernel, it just hooks into the already-running kernel. This is, of course, a massive simplification.

beuges posted:

If it can create VMs when the OS that the container is configured for doesn't match the underlying OS, can I run containers for Linux and Windows on the same box or is it limited to just one OS type per docker engine? Related, could I run containers for Windows Server Core and Windows Server Nano side by side on the same box, even though they are different base images?

Windows can run Windows containers. Linux can run Linux containers. Windows can also run Linux containers, but not at the same time as Windows containers.

In the case of Linux containers running on Windows, it actually does use a Linux VM to host the containers.

You can run as many different containers from different base images as you want, as long as the OS "flavor" is the same -- Windows or Linux.

FWIW, my experience with containers for Windows hasn't been great so far.

New Yorp New Yorp fucked around with this message at 16:50 on Feb 4, 2018

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

beuges posted:

Sure, but my understanding/experience of Hyper-V so far has been a means to run VMs, hence my confusion. Also, I was trying to work out how it would handle presenting the Server Core base image to the container when it was actually running on Windows 10, but I guess since they basically share the same kernel for the most part, that makes it a lot easier.
This does make things clearer for me though, thanks!

No problem! I've been on a Windows containers kick lately, trying to containerize a C# build environment. It's been unpleasant.

Linux containers work great, though.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

The NPC posted:

Starting down a similar path. I just finished The DevOps Handbook and highly recommend it. Gonna grab The Phoenix Project next. I was looking for Windows specific books before the holidays, but it looks like the few Windows specific books are going to be published in the coming months. Get familiar with Powershell if you aren't already as well as some Linux environment.

Whether you're on Linux or Windows, Docker underpins everything, and Kubernetes seems to be coming out on top for orchestration (AKS being managed Kubernetes).

You can skip the Phoenix Project if you're interested in technical details. It's mostly about Agile project management and does not go into any technical depth at all.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

The Fool posted:

Am I shooting myself in the foot if my first foray into CI/CD is VSTS? It's been super easy to set up and I haven't run into anything I wanted to do that I haven't been able to do yet, but since it doesn't seem to have a lot of community uptake, I can't help but think my time would be spent better with other tools.

Nope. It's an awesome platform, and the basic tenets of continuous integration/delivery are tool-agnostic anyway.

[Full disclosure: I work for a Microsoft partner and do a lot of work in VSTS]

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Docjowles posted:

Well the idea is that it goes from one Jenkins I am responsible for to a bunch of Jenkinses individual teams are responsible for. We provide a platform and then the teams are delegated access to do what they need on it. But I take it I'm doing something very wrong here so am open to suggestions. I'm trying to do the neighborly DevOps thing here.

We get a disproportionate number of tickets requesting changes to Jenkins, upgrades, new plugins, new nodes. Everyone wants their change now. Yet if it's down for 10 seconds HipChat starts blowing up with "hey is Jenkins down for anyone else?!? Are Jerbs aren't running" comments. I want to get out of the business of managing Jenkins. Unfortunately it's also critical to the business and a shitton of jobs have built up in there over the years, so just switching to something better isn't possible overnight.

How do you all deal with this? Features of the paid Cloudbees version? Schedule a weekly maintenance window and tell people "tough poo poo, wait til Wednesday nights, and at that time the thing will be restarted so don't schedule or run stuff then"? Some other incredibly obvious thing I am missing?

I have a customer that has a similar problem with VSTS build. One central "ops" group, but lots of individual teams that all have different build requirements and a constantly shifting sea of crap that needs to be installed for their builds to work. Some of which break builds for other groups. Great fun.

Something I've been experimenting with for them is containerizing the agent and build environment. Let them be responsible for maintaining their own build stuff, then it's just a matter of giving them the means of running their containers. The problem I've been having is that Windows containers are still kind of, uh, lovely. And that this customer is totally incompetent and I'm not sure they'd be capable of understanding or maintaining a containerized solution, but that's not a technology problem.
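
For the VSTS case, the shape of it is: each team builds its own image on top of the stock agent image with their pile of build dependencies baked in, and "giving them capacity" is just running containers. Something like this, using the Linux agent image for illustration (image and variable names are from memory, so treat them as approximate -- and the Windows flavor is the part that's still lovely):
code:
docker run -d `
    -e VSTS_ACCOUNT=myaccount `
    -e VSTS_TOKEN=$env:VSTS_PAT `
    -e VSTS_POOL=Team-A `
    team-a/build-agent:latest    # their image, built FROM the stock microsoft/vsts-agent image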

I don't see why something similar couldn't be applied to Jenkins.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Extremely Penetrated posted:

Can I please get some advice from those who have done this before? My org wants to get started with being able to host Windows containers, as well as support some CI for a handful of devs each doing their own thing. Some are on TFS and others on an old version of GitLab, but nobody here has any CI/CD experience. We're 100% on-prem, no butt stuff. There's a need to keep things as simple as possible so that I'm not creating a nightmare for the rest of the Ops team.

My current plan is to do a couple Docker Swarms with Traefik for ingress, and then move all the devs to an upgraded version of GitLab for image repositories and CI jobs. I'd like to make them a sample pipeline to use as a reference, and then make them responsible for their own crap. I'm not sure yet if I should do a build environment or have them build on their workstations and upload to the repository. Does this approach make sense?

I don't have a clear idea of our dev's typical workflow, but they mostly make little .NET webapps with databases on an existing SQL cluster. They manually update UAT/prod by copying files over. Is there anything in my proposed plan that would be a no-go for normal dev work? What should I be asking them or looking for?

Erwin is 100% correct.

One of the major things I do professionally every single day is help teams implement continuous delivery pipelines. If you're not already on Docker, you are putting the cart before the horse in a big way. There's such a thing as "concept overload". You need to make incremental improvements to the existing process over a period of time, otherwise everyone will be unable to maintain, use, or troubleshoot the solution you deliver... except you. The less buy-in you have from the rest of the team and the more foreign it is, the more likely you are to be met with hostility and nay-saying. And in that case, god help you if your solution has a problem.

Also be aware that Windows containers are garbage: I have yet to successfully containerize anything other than trivial, contrived "Hello World" applications with them.

[edit] Also be aware that unless it's a major priority to implement good test automation practices as part of all of this, the net result of your effort is going to be accelerating the rate at which the team can push bugs into production. I'm making an assumption about their level of testing maturity (nonexistent to low) based on their deployment practices (stone-age), which could be wrong.

But basically doing this right is a long-term project that's a big team effort and requires significant changes to how people do their jobs day-to-day. The devops mantra is "people, process, tools", not "tools, tools, tools".

New Yorp New Yorp fucked around with this message at 18:49 on Jun 7, 2018

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Scikar posted:

Windows containers are totally a thing and it's where Nano server ended up (it's now a container-only OS). They just don't really solve anything. The MS container images for things like the Cosmos DB emulator are built off Server Core by default, so they clock in at 1GB for the OS layer. If you are using .NET Framework that's your only option. If you want to use Nano server (and of course you do because 1GB for your OS layer is nuts) then you have to use .NET Core. But .NET Core is cross platform anyway, so at that point you can just use Linux containers, and why wouldn't you because otherwise you're cutting yourself off from 99% of the Docker ecosystem for zero benefit (or else having to run a separate set of Docker hosts).

I haven't really thought about it on a larger scale but I suspect it's less effort to set up a pipeline for .NET Core apps running on Linux containers and then gradually port projects from Framework to Core when you update them, than it is to take your existing full fat Framework apps and get them to play nicely in Server Core containers.

Yeah, here's an example: you can't install an MSI package in Nano. You can in Server Core, but that doesn't mean that every MSI (or even most of them) will install successfully.

If you have a CRUD web app that has literally no OS dependencies other than IIS and the .NET framework (and any associated assemblies that your application builds or deploys), there's a pretty good chance it will work in a Windows container. Anything else? Haha, good luck.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

The Fool posted:

VSTS is cool and good if on-prem is not a requirement.

And can be installed on-prem if it is, although of course it only runs on Windows/MS SQL. The build/release agent is cross-platform, though.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Cancelbot posted:

Have any of you done a sort of "terms and conditions" for things like cloud adoption? Our internal teams are pushing for more autonomy and as such are wanting to get their own AWS accounts and manage their infrastructure, which in theory sounds great. But when they need to VPC peer into another team's account or connect down to our datacentre, something has to be in place to ensure they're not going hog wild with spend, creating blatant security risks, etc.

I'm trying to avoid a heavily prescriptive top-down approach to policy as that slows everybody down, but management want to be seen to have a handle on things, or at least make sense of it all. I've started work on a set of tools that descend from our root account and ensure simple things are covered (do teams have a budget, are resources tagged, etc.), but I'm not sure where to go from here in terms of making this all fit together cohesively.

I work in Azureland but I'm sure analogous things exist in AWS.

1) Policies are in place to restrict some things (can't create resource types X,Y,Z, VMs can't be in the "super expensive" class, etc). There's a well-defined, relatively red-tape-free process to say "Hey, I need an exception".
2) All changes are made via ARM templates that are stored in source control and are applied via a continuous delivery pipeline (rough sketch of the apply step after this list). All changes go through PRs and someone from the corporate cloud group is involved in the PR. Dev/test subscriptions exist to test ARM templates and/or tweak things through the portal with impunity.
3) Some things (generally, networking-related stuff like VPN connections and virtual networks for VMs) are controlled by corporate. If your VM needs to be able to talk to X, here's the vnet you connect it to. If you want to set up an elaborate weirdo internal network that doesn't interact with anything else, go nuts, do whatever you want.
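
On point 2, the actual apply step in the pipeline is just the stock cmdlet pointed at the source-controlled template (resource group and file names invented):
code:
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'team-a-dev' `
    -TemplateFile .\templates\webapp.json `
    -TemplateParameterFile .\templates\webapp.dev.parameters.json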

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

22 Eargesplitten posted:

I’m going to stop making GBS threads up the Working in IT thread with Docker stuff so I can poo poo it up with other topics. I’ve been doing some beginner tutorials at work, but I want to set something up at home. My desktop can sometimes be inconvenient to work from by virtue of being loving gigantic and stuck in one place. I want to set up a MEAN stack CRUD application, and it seems like being able to VPN into it from my laptop would be good. Here’s what I’m thinking I’ll need:

Container running VPN software of my choice.

Container running Mongodb.

Container running Express.

Container running Angular 2.

All of this on a Linux VM.

What am I missing?

I don't understand what problem you're trying to solve.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

22 Eargesplitten posted:

I’m a dumbass with no experience in web dev and am trying to figure this out as I go.

It's concept overload. Trying to learn 9 different new things at once is guaranteed to end in failure. Choose one or two of those new things you're unfamiliar with and learn about them in isolation. You don't have to become an expert, just comfortable.

If you don't know anything about web dev, combining "not knowing anything about web dev" with "totally unfamiliar development environment and deployment toolchain" is a terrible idea.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

22 Eargesplitten posted:

I was also under the mistaken impression that Docker only worked on Linux, but there’s a version for Windows so I don’t know if I was reading it wrong or just had old information.

Not only does Docker work on Windows, but it can run Linux containers.
