Vanadium
Jan 8, 2005

We have a devops team, but we neglected to invite them to our meeting about, idk, monitoring our services or w/e. We're probably doing this wrong.

I'm kind of in a position where I could in theory introduce and apply good devopsy practices to our stuff, but I'm not really confident enough to do much but go with the flow of the existing, possibly suboptimal approaches...

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Vanadium posted:

We have a devops team ... we're probably doing this wrong.

Having a devops team is doing it wrong by definition.

Vanadium
Jan 8, 2005

Yeah, that's why I brought it up

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Pollyanna posted:

I've always seen SRE stuff described as "be on PagerDuty and get frantic calls when alarms go off," while "doing devops" is Docker, CI/CD, and AWS. :shrug:

I've dealt with Docker before in a very limited capacity, so I listed it on my resume, but apparently it's a whole field of study now. And drat near everything wants AWS experience now.

SRE work is a superset of software engineering that deals not only with CI/CD but also with platform stability. It can definitely involve PagerDuty work, but a lot of it is looking at existing architecture and performance, helping development teams identify bottlenecks and problem areas, and helping them reengineer with an eye toward operational improvement.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There's a rainbow of titles that seem to be oriented primarily around one axis: how much of the toil of classic sysadmin-style operations work do you have to deal with day-to-day? But all of them should treat sshing into machines to fix random problems like it's cancer.

Nobody ever has a title like "Agile Engineer" or "Program Engineer," and no company has an "Agile Group," but boy are there tons of jobs with "devops" in the name. They seem to come universally from big, bureaucratic companies that are not respected by software professionals. Most other organizations use titles like "Infrastructure Engineer," "SRE," or "Cloud Engineer" that sound more generic and reflect that the role is a type of software engineer, rather than a computer janitor you throw Jenkins output at before going back to a world where all you care about is whether your code builds and passes tests, with maybe a ticket later that it's segfaulting in prod.

Pollyanna posted:

And drat near everything wants AWS experience now.

Mark my words, AWS is the Java of infrastructure.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

Mark my words, AWS is the Java of infrastructure.

If you see AWS predominantly as an infrastructure play, then hoo boy are you underestimating Amazon.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I see it as far more than just infrastructure, but 99%+ of the huge companies I've seen that use it as a big part of their strategy will never be able to think of it as more than IaaS. Most of their CTOs have never worked for a tech company in their entire careers. That's a big deal.

I've worked with a handful of the top 10 largest AWS customers so far ($10MM+/mo is the smallest), and they treat it like a datacenter, with deny-by-default service offerings, as if it were a separate software vendor like MS or Oracle that must go through purchasing and so forth. Review is warranted, but taking a year to approve Lambda and denying EFS outright is just sillypants. The deployment patterns I'm seeing are still lift-and-shifts taking 3+ years and rewrites that will take another 5+, so in 8 years they'll be where companies that use cloud somewhat right were 5 years ago, while the technology gap between laggards and fast movers keeps growing.

The crazy thing is that datacenter management at the massive companies is so bad that $120MM/year for a couple hundred applications is a massive bargain. When you burn $2Bn/year on datacenters for worse uptime and worse performance than AWS delivers, even if you let instances sit untouched for 2+ years, you're just bad at tech, full stop.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS

necrobobsledder posted:

The deployment patterns I'm seeing are still lift-and-shifts taking 3+ years and rewrites that will take another 5+, so in 8 years they'll be where companies that use cloud somewhat right were 5 years ago, while the technology gap between laggards and fast movers keeps growing.

this is simultaneously killing me and making me a ton of money. you can consult on these projects drat near in perpetuity, but all you've gone and done is spent 5 years making decent money off of dying/failing orgs.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Vulture Culture posted:

If you see AWS predominantly as an infrastructure play, then hoo boy are you underestimating Amazon.

I mostly use lambda nowadays, but I'd be interested in hearing what else you're talking about here as I don't really follow this space much.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

StabbinHobo posted:

you can consult on these projects drat near in perpetuity, but all you've gone and done is spent 5 years making decent money off of dying/failing orgs.

Trying to save someone who's drowning can drown you too, even if you're experienced and trained. While I know plenty of people make great money doing it (better than a staff engineer at a big tech company is my barometer), I'm not one of them, nor will I ever be, for various reasons.

Thermopyle posted:

I mostly use lambda nowadays, but I'd be interested in hearing what else you're talking about here as I don't really follow this space much.

There's a ton of service offerings on both AWS and GCP that hit the "80%+ of users' needs" feature set beyond "make me servers and networks": entire IoT device-management and identity platforms, service desks, machine-learning-based analysis, etc. With all of these built around pay-as-you-go models, they can potentially create a Salesforce-like business with every other thing they put out (not that they are, but iteration / MVP blah blah). The organization I'm at now is a victim in part of AWS commoditizing what used to be our "secret sauce" (admittedly, it was never that hard IMO). And with so many companies being built on top of AWS now, it's super easy for Amazon to acquire companies and integrate them in the future.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

There's a ton of service offerings on both AWS and GCP that hit the "80%+ of users' needs" feature set beyond "make me servers and networks." ... And with so many companies being built on top of AWS now, it's super easy for Amazon to acquire companies and integrate them in the future.

And with so much tech running on AWS, Amazon profits from that SaaS growth regardless of whether they actually own the Salesforce-like business or not. That's where the real danger to traditional business models comes in: more competitors running more efficiently, stealing legacy businesses' need for servers away whether they move to cloud or not.

FamDav
Mar 29, 2008

Thermopyle posted:

I mostly use lambda nowadays, but I'd be interested in hearing what else you're talking about here as I don't really follow this space much.

lambda in some sense is one of those things, but

* l4/l7 load balancers
* intensely scalable key-value stores
* exabyte-scale object storage
* managed mysql/postgres/oracle + custom variants like aurora
* distributed tracing
* metrics, logs, alarms
* peta/exabyte-scale managed data warehouses
* several managed ML and "big" data systems
* build, deploy, and pipelines
* managed git
* CDN
* declarative infrastructure
* cloud windows desktops
* managed exchange, documents
* managed pub/sub
* managed queues, both in the form of SQS (lower overall throughput but with a very simple API for synchronizing consumers; sketch below) and kinesis (effectively infinite throughput but requires more individual coordination)
* managed state machines
* managed work chat
* managed call center
* multi-account organizations
* a particularly thorough permissions system for all of this

and there are things I've missed, and things that have yet to be released

EDIT: To make this less of an appeal to lists, I'll add that most of this stuff is free or effectively free on top of the cost of compute/storage/networking. It's also billed in fractional increments over fractional periods of time, so you really do pay for just what you use. The biggest downside of this (and a thing we need to focus on correcting) is that it's so easy to forget about all the things you've started using and end up with a bill that doesn't accurately reflect what you thought you were using.
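
To make the SQS point concrete, here's a minimal sketch of that consumer-synchronization model in Python with boto3 (the queue URL, region, and message body are all made up for illustration):

code:

import boto3

# Region and queue URL are placeholders
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

# Producer side: enqueue a unit of work
sqs.send_message(QueueUrl=queue_url, MessageBody="encode-video-42")

# Consumer side: long-poll for work. SQS hides a received message from
# other consumers for the visibility timeout, which is what makes
# consumer synchronization "free" here; with kinesis you track shard
# iterators and checkpoints yourself.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])  # stand-in for real work
    # Deleting acknowledges the message; if you don't, it reappears
    # after the visibility timeout and another consumer retries it.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])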

FamDav fucked around with this message at 22:52 on Aug 6, 2017

JHVH-1
Jun 28, 2002
What are some ways/systems people are using to manage media (videos, audio, etc.)? I don't think I want to store it in git, even with LFS enabled; right now we just put files on S3 or they get manually uploaded. I was thinking of building a central CDN that can serve content shared by multiple sites, so content managers can just link to files directly, but it would be nice to have some kind of back-end CMS to browse it all and store it in S3. A plus if it can keep track of things like duplicates. It would be nice to just hand out a login and let people browse around and upload new content without teaching them how S3 works (though for a lot of people, just telling them to use Cyberduck is enough).

Or if the media can be stored like packages somewhere and tied to the code builds, maybe that would work.

EDIT: This looks kinda cool but more of a self hosted s3 https://www.minio.io

JHVH-1 fucked around with this message at 17:47 on Aug 7, 2017

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
S3 and EFS are your primary options for large file storage. I wouldn't use EFS unless you're only storing files temporarily, given its steep cost compared to S3, but EFS is substantially faster in throughput (S3 has maxed out at 75 MBps per file for us with 16 concurrent multi-part download transfers running; EFS can go above that easily). It sounds like you'll need some form of content management system if you're trying to do more than give a UI to technical people. Heck, the product I work on is specifically built for people having trouble managing large multimedia files and pushing them to distributors. There's expensive stuff like T3Media / Wazee built for large content providers like sports networks, but that sounds like overkill here. Really, anything more than Cyberduck or whatnot and you're getting into CMS territory anyway.

I'd set up an AWS Lambda function to handle upload triggers and checksums, use EFS for temporary file storage to make checksumming faster, run instances for the heavy lifting, use SQS as the task queue, host a basic UI page out of S3, and store the file checksums in S3. If user volume isn't that high, your primary costs would be the instances mounting the EFS NFS endpoints and performing checksums. The only reason I'd add instances is that long-running Lambda functions can get pricey, and transferring files out of S3 can be really slow from a single-threaded Lambda function (I don't think you're allowed to spawn 16 threads to speed up S3 downloads as multi-part downloads).
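
Roughly, the trigger-plus-checksum piece could look like the sketch below: a Python Lambda handler for S3 upload events. The results bucket name is made up, and per the cost caveat above a real setup would hand large files off to instances via SQS rather than hashing inline; this just shows the mechanics.

code:

import hashlib
import boto3

s3 = boto3.client("s3")
RESULTS_BUCKET = "example-checksum-results"  # hypothetical bucket

def handler(event, context):
    # S3 put events carry the bucket/key of each uploaded object
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Stream the object so large media files don't blow Lambda's memory
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        digest = hashlib.sha256()
        for chunk in iter(lambda: body.read(1024 * 1024), b""):
            digest.update(chunk)

        # Store the checksum alongside the file for later dedup checks
        s3.put_object(
            Bucket=RESULTS_BUCKET,
            Key=key + ".sha256",
            Body=digest.hexdigest().encode(),
        )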

EssOEss
Oct 23, 2006
128-bit approved
In VSTS/TFS, is there some way to tell it "Don't run my build definition multiple times in parallel"? I have some external dependencies that will break if I run a build definition twice concurrently. However, I cannot find any way to actually limit this.

The subject is very difficult to Google, as well, since everyone seems to *want* more parallelism, so every answer is about how to make it happen.

Warbird
May 23, 2012

America's Favorite Dumbass

Everyone please lodge your complaints with the nature of "DevOps" jobs after I get a job.

So, Docker question. I can wrap my head around general containers just fine, but more "interactive" ones are escaping me. Does setting something up with the intention of regularly passing commands into a container go against the whole point of the thing?

Specifically this. My limited understanding is that the image is intended for use in a CI setup. I imagine that if you're passing a limited set of commands on a given interval, then it's fine to have Jenkins/Chef/whatever fire off a given input when needed. Is that about right?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Using volume mountpoints in a container (read: persistent state) is accepted practice as long as you know how to manage them effectively. Most of the criticism is about production deployments of containerized applications and services.
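
And on the "passing commands regularly" part of the question: that's usually just docker exec against a long-running container, which is fine for CI-style use. A rough sketch with the Docker SDK for Python (the container name and command are invented):

code:

import docker

client = docker.from_env()

# Assumes a long-running container started elsewhere, e.g.
#   docker run -d --name build-env my-build-image sleep infinity
container = client.containers.get("build-env")

# Fire a command inside the running container, the way a CI agent would
exit_code, output = container.exec_run("make test")
print(exit_code, output.decode())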

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

EssOEss posted:

In VSTS/TFS, is there some way to tell it "Don't run my build definition multiple times in parallel"? I have some external dependencies that will break if I run a build definition twice concurrently. However, I cannot find any way to actually limit this.

The subject is very difficult to Google, as well, since everyone seems to *want* more parallelism, so every answer is about how to make it happen.

Assuming private/on-prem agents, not hosted: you can set a custom Demand on the build definition and assign a matching Capability to a single agent, but then the definition can only ever run on that one agent, even when it's busy with other builds while other agents sit idle.

Fundamentally though, your build process is totally broken if multiple builds can't be run in parallel. I'd focus on fixing that problem. What is it doing with "external dependencies" that causes a failure? What are those external dependencies?

EssOEss
Oct 23, 2006
128-bit approved
Yeah, I do not want to limit it to one agent.

For the sake of simplicity, you can imagine my build process uploading http://example.com/latestversion.exe. If two builds run in parallel, the last one to finish wins, and there's no way to know whether the one that actually wrote the file was from the most recent check-in.

Serializing the builds would be the easiest way to eliminate such issues.

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

EssOEss posted:

Yeah, I do not want to limit it to one agent. ... Serializing the builds would be the easiest way to eliminate such issues.

This seems like a weird thing to be doing anyway, but why not have your build process emit versioned artifacts and then have another job that marks one of them 'latest'? With TeamCity or Jenkins it should be pretty trivial to figure out which is more recent based on the source revision and name them appropriately.
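
A sketch of that pattern with boto3, for illustration (bucket name and key scheme are made up): each build uploads an immutable, revision-named artifact, and a separate serialized job repoints "latest," so parallel builds can't clobber each other.

code:

import boto3

s3 = boto3.client("s3")
BUCKET = "example-releases"  # hypothetical bucket

def publish(build_output_path, revision):
    # Immutable, revision-named key: two concurrent builds can never
    # overwrite each other because their keys differ.
    versioned_key = "app/{}/app.exe".format(revision)
    s3.upload_file(build_output_path, BUCKET, versioned_key)
    return versioned_key

def mark_latest(versioned_key):
    # A downstream job decides which revision "latest" points at,
    # instead of racing uploads to the same key.
    s3.copy_object(
        Bucket=BUCKET,
        Key="app/latestversion.exe",
        CopySource={"Bucket": BUCKET, "Key": versioned_key},
    )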

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

EssOEss posted:

Yeah, I do not want to limit it to one agent. ... Serializing the builds would be the easiest way to eliminate such issues.

Use a Publish Artifacts step. Builds already natively know how to publish their outputs so they're available downstream.

OWLS!
Sep 17, 2009

by LITERALLY AN ADMIN
Anybody going to Boston DevOps Days?

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

OWLS! posted:

Anybody going to Boston DevOps Days?

I'm on my way there right now if only the T would run a little faster.

Docjowles
Apr 9, 2009

Blinkz0rz posted:

I'm on my way there right now if only the T would run a little faster.

as a Boston resident, I have bad news for you~

I went last year but had a conflict this week unfortunately and can't make it. Hope there's some good sessions! I always enjoy DevOps Days events, been to Denver and Boston so far.

OWLS!
Sep 17, 2009

by LITERALLY AN ADMIN
the t, run well when you need to? Lol.

also oh boy goonops meet?

Docjowles posted:

I went last year but had a conflict this week unfortunately and can't make it.

Boo

OWLS! fucked around with this message at 14:22 on Sep 18, 2017

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Docjowles posted:

as a Boston resident, I have bad news for you~

I went last year but had a conflict this week unfortunately and can't make it. Hope there's some good sessions! I always enjoy DevOps Days events, been to Denver and Boston so far.

No, trust me, I'm intimately familiar with how bad the T is. I just usually avoid the Orange Line, but welp

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Hi, relative Docker noob here. The Docker docs really strongly urge you to use (named) volumes as opposed to bind mounts.

However, they do a poor job of persuading me that volumes are so obviously superior; I may be paranoid, but they really sound like a pushy vendor when they make the case.

Here is what they say:

quote:

Volumes have several advantages over bind mounts:

Volumes are easier to back up or migrate than bind mounts.

I cannot see in which sense this is possibly true. The official docs for backing up and restoring volumes have been deleted (this is the link you find on this page, for example). Googling "how to back up Docker volumes" turns up approaches that use Linux images to mount the volumes and then back up the contents through a temporary bind mount, i.e. literally adding extra steps over just backing up the bind mount in the first place (see the sketch below).
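
The pattern those search results describe looks roughly like this with the Docker SDK for Python (the volume name and host path are made up):

code:

import docker

client = docker.from_env()

# Back up the named volume "app-data" by mounting it read-only into a
# throwaway container next to a host bind mount, then tarring across.
# Note the bind mount shows up anyway, just one container later.
client.containers.run(
    "alpine",
    "tar czf /backup/app-data.tar.gz -C /data .",
    volumes={
        "app-data": {"bind": "/data", "mode": "ro"},       # named volume
        "/srv/backups": {"bind": "/backup", "mode": "rw"},  # host path
    },
    remove=True,
)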

quote:

You can manage volumes using Docker CLI commands or the Docker API.

You can manage bind mounts using every file management tool under the sun.

quote:

Volumes work on both Linux and Windows containers.

Ok, this one is fair. If you need to transfer or share data between Linux and Windows containers, sure, volumes all the way.

quote:

Volumes can be more safely shared among multiple containers.

I can't figure out what they mean. Maybe the CLI arguments for reusing volumes are slightly harder to screw up, e.g. by accidentally binding the same host path to two containers?

quote:

Volume drivers allow you to store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.

Are there really more Docker volume drivers than, say, FUSE filesystems?

quote:

A new volume's contents can be pre-populated by a container.

This one seems true as well, although (a) I don't see why they couldn't let a container pre-populate a new (nonexistent or empty) bind mount too, and (b) I haven't yet run into any containers that rely on this.

So far I'm leaning strongly towards bind mounts as both easier to manage and easier to protect, and using volumes only for convenience when putzing around with data I don't mind losing. The one actual argument I've found against bind mounts is that they may incur a performance loss, but that appears to be significant only on non-Linux hosts.

Am I wildly off base somewhere?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
The main thing with bind mounts is that they're fine for individual hosts, but manually managed local-filesystem fiddly-bits don't scale well into a container orchestration system like Kubernetes, Swarm, or Nomad. It's also substantially harder to use them in a cross-platform way if you're trying to share build tooling with, e.g., developers on Mac and Windows hosts, and losing that nukes a lot of the benefit Docker provides in the first place. If you're just looking at Docker for image-sharing and some namespace isolation to solve one particular deployment problem, your approach is probably fine.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Has anyone ever managed to weld Docker and Maven together in Jenkins so that it's possible to take advantage of the Maven plugin while running everything inside a container?

Writing a Jenkinsfile that uses Docker? That's easy. Installing the Maven integration plugin and having it read the POM and figure everything out by itself? That's easy. Putting the two things together? :supaburn:

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
The solution to most problems involving Jenkins plugins is to stop using plugins for anything that doesn't inherently require a Jenkins plugin.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Plorkyeran posted:

The solution to most problems involving Jenkins plugins is to stop using plugins for anything that doesn't inherently require a Jenkins plugin.
True of basically any critical workflow component ever, but doubly so for Jenkins.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Wait, the Maven plugin AND Jenkins? Depending on which Maven plugin you use for Docker (some substitute variables into Dockerfiles, others flat-out generate them, last I saw), you're opening Pandora's box or signing up for cancer.

You may want to stick with a Jenkinsfile that launches a single Maven or Gradle target / goal and nothing more, to contain the configuration drift problems that tend to happen with Jenkins installs.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

necrobobsledder posted:

Wait, the Maven plugin AND Jenkins? ... You may want to stick with a Jenkinsfile that launches a single Maven or Gradle target / goal and nothing more.

I decided the Maven plugin could eat poo poo and I'm just running shell commands inside the container, as Grod intended.

Also, why is the Jenkins documentation so sparse and why do the little strands that exist all suck? Is there a single coherent reference for the Jenkinsfile DSL anywhere, even though I already know the answer?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There’s a doc by Wilson Mar on the DSL that I found useful in the past. https://wilsonmar.github.io/jenkins-plugins/

I know people have had the most success with Jenkins by simply using shell scripts kept in the repo. It makes it tougher to override some tool paths, but I wrote up a list of the shell variables Jenkins makes available and life was good. The only thing left is to figure out a way to let developers control the container used to build their project for matrix builds.
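
The kind of variables I mean, for reference: Jenkins exports these to every build's child processes, so repo scripts can read them directly (GIT_COMMIT assumes the git plugin). A quick Python sketch:

code:

import os

# Standard environment Jenkins injects into every build; scripts that
# live in the repo can branch on these instead of plugin configuration.
build_number = os.environ.get("BUILD_NUMBER")  # e.g. "42"
job_name = os.environ.get("JOB_NAME")          # e.g. "my-app/master"
workspace = os.environ.get("WORKSPACE")        # checkout directory
node_name = os.environ.get("NODE_NAME")        # agent the build ran on
git_commit = os.environ.get("GIT_COMMIT")      # set by the git plugin

print("build", build_number, "of", job_name, "at", git_commit, "on", node_name)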

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Me, 3 months ago: If you share unversioned deployment scripts across dozens of similar applications, eventually someone will make a breaking change and you'll get abrupt deployment failures.
Client: That will never happen

Them, 3 minutes ago: Someone made a breaking change to our deployment scripts and now releases are failing left and right!

Sometimes I hate being right.

Tomorrow: Implement versioning of their deployment scripts.

mr_package
Jun 13, 2000
I have taken over a home-grown build system that's a mix of Windows, Mac, and Linux machines. Is there any good container-style way of deploying/managing code to the build servers in a mixed environment like this? If I could bundle up the build-system code with the Xcode/Visual Studio/Java/whatever versions without having to manage VMs, I'd be happy. But maybe VMs are the answer, and I should use the VMware API to allocate a corresponding VM each time I need to run a build? The Macs are standalone and not part of the VMware cluster, but I guess I could add them.

Right now I don't have any automation or configuration/orchestration tooling to manage the servers themselves, but with fewer than 10 machines it's not dire (just annoying). I really want to transition away from this, though; it would be awesome if I could dockerize Xcode, build scripts, and dependencies all together and worry less about OS state. Is there an approach that's working for people?

Mr. Crow
May 22, 2008

Snap City mayor for life
On the noob Docker chat... the thing to realize is that the majority of the recommendations assume you're running "in the cloud," since that's where most of Docker's use is.

You can totally use it as a simple virtualized application environment on a single host, though; just realize that the intended market, and therefore all the guidance, assumes multiple hosts in the cloud (see named volumes vs. bind mounts above).

We use it heavily in our CI process to simplify agent requirements: the only things agents need are Vagrant and Docker, everything else provides its own build environment via one or both of those, and the builders basically just "docker run" everything.

It's nice when people need to pull down a project they're not working on, too: they don't need to figure out how to compile or install the dependencies and can be up and running immediately.

EssOEss
Oct 23, 2006
128-bit approved

mr_package posted:

I have taken over a home-grown build system that's a mix of Windows, Mac, and Linux machines. Is there any good container-style way of deploying/managing code to the build servers in a mixed environment like this?

Docker supports all of those operating systems. It also supports multi-stage builds (you first build an image that builds your app, and out of that you build your app's image). I have not actually used the latter, since I have a fairly low-expectations build process, but in theory you can make transient containers that build your app container images with Docker.

For any sort of scripting, use PowerShell Core, which is multiplatform; this lets your multiplatform apps use the same setup scripts on all platforms. No need for different scripts per OS, just maybe some "if ($IsLinux)" statements here and there.

For building the apps, I guess it depends. I am in a .NET shop, so it's either .NET Framework (build on Windows, deploy to any OS) or .NET Core (build on any OS, deploy to any OS). I generally keep my builds in Windows just for uniformity's sake, regardless of where I deploy to. However, you might not have that luxury if you are not .NET based.

It is not clear to me exactly what your scenario is. You talk about Xcode but then also talk about Windows. Go into details.

mr_package
Jun 13, 2000
These servers build apps for each platform, e.g. Linux builds for Android, the Macs build for OSX, etc., so each platform still runs its native tools. I was just wondering what people use to keep things under control in this type of environment. If I could have an "OSX Build Server, Xcode 8.3" container and spin it up on demand, for example, rather than installing Xcode 8.3 on the Mac build servers themselves, and containerize the whole build toolchain, that seems kind of ideal. But I don't think we're there yet, especially in terms of OSX support. (Docker on Mac is still virtualized Linux at its core, so I don't think it can run xcodebuild etc.) Maybe for this use case VMware is still the best option.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

mr_package posted:

These servers build apps for each platform, e.g. Linux builds for Android, the Macs build for OSX, etc., so each platform still runs its native tools. ... Maybe for this use case VMware is still the best option.

I had a similar build-server catalogue at my last place and an IT department resistant to VMs, so I ended up using SaltStack to auto-install all my build dependencies, and it basically went okay.
