what build system
intern manually running tasks by hand
some gigantic proprietary system that only runs on windows
random link to github project with no commit in 3 years
goku
Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe
I'm actually pretty OK with bamboo tbh

I ran a Jenkins server for a while to manage my team's builds - it was OK, but very fiddly and hard to get right

Eventually I migrated to our company-wide Bamboo server. Overall it works pretty well, and IT janitors it - that's a plus in my book

Use it with exclusively remote build agents though - using local agents is setting yourself up for a whole new world of bullshit

op, does your dept keep the Bamboo server up-to-date? Atlassian develops it pretty actively

Poopernickel fucked around with this message at 20:09 on Dec 11, 2019

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

rotor posted:

honestly i feel like just rolling your own in this instance might not be the worst idea, but obvy i don't know your requirements.

please for the love of god

do not do this

those who come after you are guaranteed to curse your name as they wade through your mess of shell scripts and cron jobs

Unless your goal is to be the CI janitor until you quit, use a system that your QA team can help manage, and your IT team can support

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

The Management posted:

I just use make, op. it runs everywhere. my build server is a script that watches git and runs make.

make is good

custom build servers are bad, don't do them

not even once
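
for the record, the thing being described is about ten lines of shell - a sketch, assuming origin/master is the branch to watch and 'make all' does everything:

code:
#!/bin/sh
# poll-and-build loop: the simplest possible "build server"
while true; do
  git fetch origin
  if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
    git checkout --force origin/master
    make all > "build-$(git rev-parse --short HEAD).log" 2>&1
  fi
  sleep 60
done

it starts out that small - the problem is it never stays that small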

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe
for me, build nirvana is:

- bamboo turns on my AWS build machine
- bamboo tells it to run a build
- the actual build is as easy as 'make all', since make is good at wrapping other complex junk
- bamboo archives the results
- there's a manual step that promotes and publishes the builds

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

Zlodo posted:

yeah if you want to have only one build config per source tree and also intermediary build files littered across your entire source tree
its not the seventies anymore grandpa

Make is good for wrapping other things, because literally its whole job is to run shell scripts.

Use CMake, autotools, or whatever - they're all good too. But it's nice having a top-level Makefile that you can use to kick off your cmake commands and junk.
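
e.g. something like this - just a sketch, assuming CMake 3.13+ (for -S/-B) and a CMakeLists.txt at the repo root:

code:
# top-level wrapper Makefile - make is just the front door,
# cmake does the actual work in an out-of-tree build dir
BUILD_DIR ?= build

.PHONY: all test clean

all:
	cmake -S . -B $(BUILD_DIR)
	cmake --build $(BUILD_DIR) -j

test: all
	cd $(BUILD_DIR) && ctest

clean:
	rm -rf $(BUILD_DIR)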

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe
hosted CI = no

don't do it, not even once

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

yep, it's ugly for sure

all in all though it's not that bad once you embrace the void - it's widely used because it's not terrible

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

Cybernetic Vermin posted:

i mean, obviously you have to accept some ugliness to solve the incredibly important problem of "check that library x is installed except please don't do it as part of compiling where it has to be looked up anyway but rather as an entirely separate and even more complex step for some reason"

What about software that can use one of a few different libs as the backend? Or software that enables optional functionality if it has a library available at compile-time?

What's your preferred way to express "when this program runs, it should get config files from $sysconfdir instead of /etc", or "for this one build on my dev machine, install everything into $HOME/bin instead of /opt"?

What's the easiest way to make sure your build still works when the compiler supplies its own --sysroot because it's a cross-compiler?

Do you like your build to run for 30 minutes and then fail to link halfway through because you didn't know you needed some library or other?

you can definitely work around all of this manually in your own Makefile, but why would you? Can you be sure you'll catch all the corner cases? Why not use a tool that's designed to deal with exactly these kinds of problems - one that deals with them in a way everybody is already used to?
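
concretely, with autotools all of that is just standard flags on configure - a sketch, assuming a stock autotools project:

code:
# dev build: install under your home dir, but read config from /etc
./configure --prefix="$HOME" --sysconfdir=/etc
make -j8 && make install

# cross-compile: name the target triplet, the toolchain brings its own sysroot
./configure --host=arm-linux-gnueabihf
make -j8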

Autotools and cmake both fill a useful niche.

CMake is a little newer, and a little easier in some cases. It's built around a bespoke macro language that isn't used anywhere else, just like autoconf. Autoconf has some better built-in features for checking if a library is available, and whether you can compile and link against it. It can even check to see if a symbol is #defined in a header, and configure the build differently if needed.
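
e.g. the configure.ac side of those checks looks roughly like this (a sketch - 'libfancy' and its header are made-up names):

code:
AC_INIT([myapp], [1.0])
AC_PROG_CC

# fail at configure time if the library is missing,
# instead of 30 minutes into the build
AC_CHECK_LIB([fancy], [fancy_init], [],
             [AC_MSG_ERROR([libfancy is required])])

# enable optional functionality only if the header is present
AC_CHECK_HEADERS([fancy/extras.h])

# configure the build differently depending on whether a symbol is #defined
AC_CHECK_DECL([FANCY_HAS_THREADS],
              [AC_DEFINE([USE_FANCY_THREADS], [1], [use the threaded backend])],
              [],
              [[#include <fancy/fancy.h>]])

AC_CONFIG_FILES([Makefile])
AC_OUTPUT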

CMake has a module (GNUInstallDirs) that emulates autotools' directory handling. Autotools needs more dependencies at source-packaging time, and fewer at configure-and-compile time.

Both let you specify standardized options and configure your build, both are super widely used, and both are terrible and have a steep learning curve. Probably a necessary evil.

Poopernickel fucked around with this message at 20:16 on Jan 8, 2020

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

Storysmith posted:

hi did you just call something written in m4 template that generates bash script snippets not terrible

good point it's definitely terrible, and m4 sucks a lot

I'm just saying that it offers a lot of value too, especially for distro package maintainers - and the value outweighs the hassle and general bullshit

also for 99% of autoconf work, you don't need to know anything about m4 at all. Autoconf just becomes yet another domain-specific macro language, no different from CMake except that you can use inline shell scripts too
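
e.g. this is legal configure.ac - the "macro language" is mostly just shell with AC_ calls sprinkled in (toy check, obviously):

code:
AC_MSG_CHECKING([for /proc/cpuinfo])
# plain shell, right in the middle of configure.ac
if test -r /proc/cpuinfo; then
  AC_MSG_RESULT([yes])
else
  AC_MSG_RESULT([no])
fi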

Poopernickel fucked around with this message at 20:55 on Jan 8, 2020

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

Zlodo posted:

autocrap

:smug:

Poopernickel fucked around with this message at 01:23 on Jan 9, 2020

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe
docker in your CI path is bad, don't run your builds in a docker

actually probably just don't use docker in general, for anything

remember: a docker is somebody who engages in docking

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

it's pretty easy to have a dockerfile and think that your builds are reproducible (see the sketch after the list), except:

1. you're downloading a linux image from a server that probably won't exist in a couple of years
2. you're using apt-get in your dockerfile, which basically guarantees you can never reproduce the configuration
3. good chance you're also using an even less reproducible package manager on top, like pip or npm
4. if your build isn't actually tied to a particular linux, then congratulations it is now
5. I guess you can archive the image manually, but in that case why fuck around with docker?
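
to make it concrete - a sketch, where the image names and versions are placeholders:

code:
# looks reproducible, isn't: "buster" is a moving tag, and apt-get
# installs whatever happens to be current the day you build the image
FROM debian:buster
RUN apt-get update && apt-get install -y build-essential

# the best you can do: pin the base image by digest and pin exact package
# versions - and even then, the repo has to still be serving those versions
# FROM debian@sha256:<digest-you-recorded>
# RUN apt-get update && apt-get install -y build-essential=<exact-version>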

also let's not forget that docker is a for-profit company owned by sharky VCs - probably shouldn't assume dockerhub is profitable or will exist in 5 years, and probably shouldn't assume anything on dockerhub will be kept private - anything that hits their servers will eventually be sold to an adtech company as ~metadata~

people will forget about docker, then they'll turn the program into some kind of shitty freemium thing to try and squeeze blood from a turnip, then it'll become irrelevant

and you'll still be stuck janitoring your dockers

maybe docker's better used as a production environment, idk - but I'm guessing probably not

Poopernickel fucked around with this message at 05:56 on Jan 15, 2020

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe

Helianthus Annuus posted:

but its cool to use a docker image to provide an environment for CI, because THAT actually does make your builds more repeatable

the problem shows up when you go to update that image, and the Dockerfile refers to an apt repo that doesn't exist anymore. then you're back to janitoring linux again

true on all counts

at work, we used to have a Linux CI build that ran inside a Docker image - over time, everything in the image got increasingly obsolete and out-of-sync, to the point where its build results were useless. Plus, the virtual root environment meant one of our devs decided to hard-code a bunch of things to dump into /opt at build time, and nobody noticed since "the build still works". After all, who would ever dream of doing something other than building inside the docker image??

we couldn't add anything new to the image because it referred to external repo sources that didn't exist any more - so we just had this black-box docker image that was getting more and more broken, and effectively couldn't be reproduced or modified.

I burned it down and moved all the dependencies to things that could be installed on the build agent with apt-get, and it's been working flawlessly ever since (it's even survived several OS upgrades).

so I guess that's a story on how to use docker to ruin a future maintainer's sanity?

Poopernickel fucked around with this message at 07:09 on Jan 15, 2020

Poopernickel
Oct 28, 2005

electricity bad
Fun Shoe
good point

with docker, you pay all that bullshit twice - once in the docker image, and once for the machine that runs the docker image
