Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

beuges posted:

Thanks for both these suggestions! I think I'll install PRTG for now, and play around with Prometheus until I have worked out how to configure it properly, and then decide which to stick with.

FWIW, I use InfluxDB+telegraf+grafana which works the same way the other person described Prometheus, but I find InfluxDB queries easier to write.
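
For a taste of why I find them easier, here's a minimal InfluxQL query ("cpu" and "usage_idle" are telegraf's default measurement/field names, but treat the specifics as illustrative):

```sql
-- mean idle CPU over the last hour, in 5-minute buckets
SELECT MEAN("usage_idle") FROM "cpu"
WHERE time > now() - 1h
GROUP BY time(5m)
```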


NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Has anybody used AppVeyor with their provided database services? I'm going crazy trying to figure out why I can't connect to them:

Turns out it's postgresql, not postgres :downs:
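
For anyone else who hits this, the relevant appveyor.yml pieces look something like the following (the credentials are AppVeyor's documented defaults for its bundled PostgreSQL; the database name is made up):

```yaml
services:
  - postgresql          # the service name is postgresql, not postgres

environment:
  PGUSER: postgres
  PGPASSWORD: Password12!   # AppVeyor's documented default password

before_test:
  - createdb my_test_db     # hypothetical database for the test run
```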

NihilCredo fucked around with this message at 01:08 on Nov 21, 2017

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe
Lol I hate that, wait till you have to find out it’s actually postgresql-9.4 on that particular installation

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I was hoping to manage my persistent EBS volumes with terraform using the aws_ebs_volume and aws_volume_attachment resources. In my head it was simple: give it a new AMI, and terraform apply would take care of detaching the volume from the old instance, destroying the old instance, creating the new instance, and attaching the EBS volume. In practice, however, it doesn't seem to be that easy; I kept running into errors about being unable to detach the EBS volume from the instance. Looks like many others run into this as well: https://github.com/hashicorp/terraform/issues/2957

What's a good way of handling this use case?

I get that terraform shouldn't be concerning itself with what is going on inside the VM, whether or not it's safe to detach the volume, etc, etc. But if terraform just cleanly shuts down the existing instance before detaching the volume it doesn't have to think about that other poo poo right? What am I missing here?

edit: Sounds like maybe I can use skip_destroy: https://serverfault.com/a/834180/280309
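
For the record, the skip_destroy approach from that answer looks roughly like this (resource names and sizes are illustrative, and the interpolation syntax is the 2017-era one):

```hcl
resource "aws_ebs_volume" "data" {
  availability_zone = "us-west-2a"
  size              = 100
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/xvdf"
  volume_id   = "${aws_ebs_volume.data.id}"
  instance_id = "${aws_instance.app.id}"

  # don't try to detach at destroy time; the attachment just falls out of
  # state, and the volume survives the instance being replaced
  skip_destroy = true
}
```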

fletcher fucked around with this message at 00:53 on Nov 28, 2017

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

I contribute to an opensource project on github. I'm not an official maintainer so I have my own fork that I submit PRs from.

This project uses travis-ci to automatically build and run tests whenever a PR is submitted. Travis outputs a console log of the build/test job that tells which tests pass/fail, but the actual details about test failures are in a generated html report.

The project maintainers have configured travis to upload this report to a server, but it doesn't work when a PR comes from a fork because it needs a secure key to upload and:

travis docs posted:

Encrypted environment variables are not available to pull requests from forks due to the security risk of exposing such information to unknown code.

So is there any solution to this that doesn't necessitate each fork configuring their own travis-ci and upload server just to view an html report?

aunt jenkins
Jan 12, 2001

You need to set up a version of the test-running task that outputs the results to stdout. Travis is designed to capture and store that output (in plain or, if you're :coal:, ANSI text). Sending HTML files around is madness.

You didn't specify what language or test runner you're using but most of them have an option for plaintext output, and you can use the .travis.yml to specify precisely what commands to run.

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

aunt jemima posted:

You need to set up a version of the test-running task that outputs the results to stdout. Travis is designed to capture and store that output (in plain or, if you're :coal:, ANSI text). Sending HTML files around is madness.

You didn't specify what language or test runner you're using but most of them have an option for plaintext output, and you can use the .travis.yml to specify precisely what commands to run.
Language is C++. Test runner is ctest.
The issue is that in this case the test data is visual, so to debug effectively it helps to be able to see the images. The html report is a single file, with images embedded in it as base64 data URIs (this can make it relatively large for an html file, on the order of 1MB or more depending on the number of image comparison test failures)
Here is an actual example: http://files.openscad.org/tests/travis-2990_report.html

Is it really such an unimaginable use-case to support any form of test report (output file) beyond a console log?
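
That said, the plaintext half of the earlier suggestion is at least cheap to add with ctest; a script section along these lines (build commands are illustrative) dumps each failing test's output straight into the Travis log, even though it won't help with the image diffs:

```yaml
language: cpp
script:
  - mkdir build && cd build
  - cmake ..
  - make -j2
  # --output-on-failure prints a failing test's full output into the console log
  - ctest --output-on-failure
```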

peepsalot fucked around with this message at 22:20 on Dec 1, 2017

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

peepsalot posted:

Language is C++. Test runner is ctest.
The issue is that in this case the test data is visual, so to debug effectively it helps to be able to see the images. The html report is a single file, with images embedded in it as base64 data URIs (this can make it relatively large for an html file, on the order of 1MB or more depending on the number of image comparison test failures)
Here is an actual example: http://files.openscad.org/tests/travis-2990_report.html

Is it really such an unimaginable use-case to support any form of test report (output file) beyond a console log?

I've not touched travis CI before, but can't you have another job happen after the PR test job completes that just takes files from the first job and uploads them then? That way the PR testing is completely separate from the upload step?

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

poemdexter posted:

I've not touched travis CI before, but can't you have another job happen after the PR test job completes that just takes files from the first job and uploads them then? That way the PR testing is completely separate from the upload step?
I'm only familiar with travis through this project, so I'm not 100% sure, but this "other job" would still need a secret key, and the problem appears to be that the jobs are defined in .travis.yml, which is part of the github repo, so any fork has full control over travis by being able to modify this file in any commit.

edit: The key would be set in an environment variable, configurable via travis web interface in the maintainer's travis settings. These env vars are normally fully accessible to the .travis.yml job, but if PR is from a fork then these are all null'd out afaict. So if travis didn't null out those env vars, a hypothetical rude dude could write their travis.yml to echo the keys to twitter or upload some 0day warez to the server or whatever.

It seems like there could/should be some way to have a protected job that is somehow not modifiable by forks to do such a thing, but not sure how that would be implemented.
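
One small mitigation that does exist: Travis sets TRAVIS_SECURE_ENV_VARS to "true" or "false", so the upload step can at least fail gracefully on fork PRs instead of mysteriously (upload_report.sh here is a hypothetical stand-in for the maintainers' upload step):

```shell
# only attempt the upload when decrypted secrets are actually available
if [ "$TRAVIS_SECURE_ENV_VARS" = "true" ]; then
  ./upload_report.sh
else
  echo "Skipping report upload: secure env vars unavailable (fork PR)"
fi
```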

peepsalot fucked around with this message at 17:25 on Dec 2, 2017

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

peepsalot posted:

I'm only familiar with travis through this project, so I'm not 100% sure, but this "other job" would still need a secret key, and the problem appears to be that the jobs are defined in .travis.yml, which is part of the github repo, so any fork has full control over travis by being able to modify this file in any commit.

edit: The key would be set in an environment variable, configurable via travis web interface in the maintainer's travis settings. These env vars are normally fully accessible to the .travis.yml job, but if PR is from a fork then these are all null'd out afaict. So if travis didn't null out those env vars, a hypothetical rude dude could write their travis.yml to echo the keys to twitter or upload some 0day warez to the server or whatever.

It seems like there could/should be some way to have a protected job that is somehow not modifiable by forks to do such a thing, but not sure how that would be implemented.

Yeah I'm not sure how it would work with those constraints. I guess the best bet would be to just spit out the html as text in the test logs or whatever is accessible to end users and then you'd just manually copy/pasta it into an html file you can open in chrome. It's pretty much the same solution recommended by aunt jemima.

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
Hi all. I assume this is the place for Docker questions. I've never used it before, so could people confirm / deny my impression of how it works / what it does? I'm skimming docs as I write this...

Given the following docker compose file on a Windows 10 machine:

code:
version: "3"
services:
 postgres:
   build: postgres
   ports:
    - "5432:5432"
 keycloak:
   build: keycloak
   ports:
    - "8080:8080"
 backend:
   build:
     context: ./backend
     dockerfile: Dockerfile-dev
   volumes:
      - ./backend:/app
      - ~/.m2:/home/akvo/.m2
      - ~/.lein:/home/akvo/.lein
      - ./postgres/provision:/pg-certs
   links:
      - keycloak:auth.lumen.local
   ports:
      - "47480:47480"
      - "3000:3000"
   environment:
     - HOST_UID
     - HOST_GID
 client:
   build:
     context: ./client
     dockerfile: Dockerfile-dev
   volumes:
      - ./client:/lumen
   ports:
      - "3030:3030"
   environment:
     - HOST_UID
     - HOST_GID
 redis:
   image: redis:3.2.9
 windshaft:
   build: windshaft
   environment:
     - NODE_ENV=development
     - PGSSLROOTCERT=/pg-certs/server.crt
     - LUMEN_ENCRYPTION_KEY=supersecret
   volumes:
      - ./windshaft/config/dev:/config
       - ./postgres/provision:/pg-certs
What happens when I run 'docker-compose'?

Does it check the local machine for installs of postgres / redis / whatever? Does it pull down docker-friendly versions of these programs from the web (eg, http://hub.docker.com/_/postgres/ or http://store.docker.com/images/postgres/), and then cache these images locally for future installs?

Is there a way other than 'docker pull X' to make images locally available? My dev machine is a desktop with limited bandwidth, so it'd be handy to be able to download an image with a laptop at the library and then install it at home.

I see that the ./directories are pointing to directories in the repo, but what about the ~/.m2 and ~/.lein directories? Some linuxy thing I shouldn't worry about?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Newf posted:

Hi all. I assume this is the place for Docker questions. I've never used it before, so could people confirm / deny my impression of how it works / what it does? I'm skimming docs as I write this...

Given the following docker compose file on a Windows 10 machine:

What happens when I run 'docker-compose'?

Does it check the local machine for installs of postgres / redis / whatever? Does it pull down docker-friendly versions of these programs from the web (eg, http://hub.docker.com/_/postgres/ or http://store.docker.com/images/postgres/), and then cache these images locally for future installs?

Is there a way other than 'docker pull X' to make images locally available? My dev machine is a desktop with limited bandwidth, so it'd be handy to be able to download an image with a laptop at the library and then install it at home.

I see that the ./directories are pointing to directories in the repo, but what about the ~/.m2 and ~/.lein directories? Some linuxy thing I shouldn't worry about?

When you run docker-compose, nothing happens by default. What it sounds like you want to do is run docker-compose build.

Each of those "services" is pointing to a different docker image or dockerfile. A docker image is a container someone else built and published to a docker repository for others to use. In this case, the docker compose file has one image referenced: redis. The rest of the services are pointing to dockerfiles. Dockerfiles start with another image as a baseline, then layer on some commands or files or whatever to customize it.

If you were to run docker-compose pull, it would only download redis. The rest of the services have to be built from their dockerfiles first, the first step of which is downloading the necessary images.

Once an image is downloaded, it's cached for reuse. So basically do a quick build and you should be good to go.
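
On the limited-bandwidth part of the question: images can be shuffled around as plain tar files, so the laptop-at-the-library plan works (the tag matches the redis image from your compose file; locally built images can be shipped the same way once tagged):

```shell
# on the laptop, with good bandwidth:
docker pull redis:3.2.9
docker save -o redis-3.2.9.tar redis:3.2.9

# at home, after copying the tar over (USB stick, scp, whatever):
docker load -i redis-3.2.9.tar
```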

DISCO KING
Oct 30, 2012

STILL
TRYING
TOO
HARD
Hey everybody, I think I'm done loving around with ansible/lxd and want to learn puppet/docker for some decently marketable skills. Anybody got a good video tutorial or site? I've been working my way through this: https://www.example42.com/tutorials/PuppetTutorial/ It seems pretty good so far, but is there anything with more directory infrastructure examples or command line execution demonstrations? Thanks.

SeaborneClink
Aug 27, 2010

MAWP... MAWP!

Napoleon Bonaparty posted:

Hey everybody, I think I'm done loving around with ansible/lxd and want to learn puppet/docker for some decently marketable skills. Anybody got a good video tutorial or site? I've been working my way through this: https://www.example42.com/tutorials/PuppetTutorial/ It seems pretty good so far, but is there anything with more directory infrastructure examples or command line execution demonstrations? Thanks.

If you're looking for marketable skills don't learn Puppet.

nikki ashton
Jun 6, 2016

by Nyc_Tattoo

SeaborneClink posted:

If you're looking for marketable skills don't learn Puppet.

Why is that? And what would you suggest instead?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
There's still a big market for Puppet, but CM work is basically for legacy deployments nowadays, and you're guaranteed to be beaten out by H-1B candidates at 40 cents on the dollar.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Look for companies that are deploying on Nomad, Rancher, ECS, or preferably Kubernetes. Otherwise, I hope you’ve got a strong specialty in something besides ops, like machine learning or Big Data BS. Even if the place is running on bandwagon technologies and it goes flop, you’ll have a skill set that good companies want, too. Companies that haven’t been able to get their services deployed into containers by now are likely going to face some serious friction trying to iterate quickly and safely, and are at risk of dying from lagging their market.

If a place needs to do some stateful deployments, I hope you’re working on stuff like Consul, Kafka, Etcd, Dynomite, or some larger databases where you can’t just do A/B deploys in 2 minutes on a whim. Heck, even Elasticsearch is becoming old school at this point.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

necrobobsledder posted:

Look for companies that are deploying on Nomad, Rancher, ECS, or preferably Kubernetes. Otherwise, I hope you’ve got a strong specialty in something besides ops, like machine learning or Big Data BS. Even if the place is running on bandwagon technologies and it goes flop, you’ll have a skill set that good companies want, too. Companies that haven’t been able to get their services deployed into containers by now are likely going to face some serious friction trying to iterate quickly and safely, and are at risk of dying from lagging their market.

If a place needs to do some stateful deployments, I hope you’re working on stuff like Consul, Kafka, Etcd, Dynomite, or some larger databases where you can’t just do A/B deploys in 2 minutes on a whim. Heck, even Elasticsearch is becoming old school at this point.

This a joke post, right? Or wait, no, you’re 16 years old?

B-Nasty
May 25, 2005

Eggnogium posted:

This a joke post, right? Or wait, no, you’re 16 years old?

If it was a joke, it was a perfect representation of a comment at Hacker News. Though, it needed more references to San Francisco, not seeing why anyone would want a car, and Soylent.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Meh, even if it’s satirical, it is some of what I think about when looking at new jobs. Places that are open to new tech most likely aren’t going to be scared to try new things, which is interesting, and they tend to pay much better than other places.

Pollyanna
Mar 5, 2005

Milk's on them.


Pay at startups is directly proportional to what they can raise from investors, VCs, and stakeholders, so buzzword bingo and hype plays a massive role. I don’t know much about pay at established companies and large corporations, but it tends to be much more tempered in reality.

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Pollyanna posted:

Pay at startups is directly proportional to what they can raise from investors, VCs, and stakeholders, so buzzword bingo and hype plays a massive role. I don’t know much about pay at established companies and large corporations, but it tends to be much more tempered in reality.

I make 50% more at a F500 than at my last startup. And make 100% more than what we pay our outsourced internal sysadmins.

They want devops too and will pay for it.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Pollyanna posted:

Pay at startups is directly proportional to what they can raise from investors, VCs, and stakeholders, so buzzword bingo and hype plays a massive role. I don’t know much about pay at established companies and large corporations, but it tends to be much more tempered in reality.

this is the exact opposite of reality unless you include equity which may not be worth anything as pay at startups

DISCO KING
Oct 30, 2012

STILL
TRYING
TOO
HARD
So there isn't a good way to learn puppet, this thing everybody's been asking if I know? Okay cool.

EDIT:

necrobobsledder posted:

Look for companies that are deploying on Nomad, Rancher, ECS, or preferably Kubernetes. Otherwise, I hope you’ve got a strong specialty in something besides ops, like machine learning or Big Data BS. Even if the place is running on bandwagon technologies and it goes flop, you’ll have a skill set that good companies want, too. Companies that haven’t been able to get their services deployed into containers by now are likely going to face some serious friction trying to iterate quickly and safely, and are at risk of dying from lagging their market.

If a place needs to do some stateful deployments, I hope you’re working on stuff like Consul, Kafka, Etcd, Dynomite, or some larger databases where you can’t just do A/B deploys in 2 minutes on a whim. Heck, even Elasticsearch is becoming old school at this point.

I'll definitely be checking these out, but legacy deployment is where everybody's at right now in silicon valley. Everybody's getting in on the whole thing, and puppet's the main game as far as I can tell because it's a finished product, not for any better reason. And by everybody I mean it looks like everybody from gerber baby products to ski manufacturers is trying to get in on puppet.

DISCO KING fucked around with this message at 17:29 on Dec 11, 2017

Pollyanna
Mar 5, 2005

Milk's on them.


I bet they’re asking you if you know Puppet because some previous engineer wrote something using it and now they’re hamstrung by this system nobody else at the company knows.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Puppet is fine; many orgs are still early in CM lifecycle, and it's got what enterprises crave. Not every industry or every job is at the front of the latest container orchestration flavor-of-the month.

It's like the Java of CM; that some H1bs know it doesn't mean it's not worth knowing.

Tigren
Oct 3, 2003
I work at one of the Big 5 and we use Puppet and Ansible all over the place. We also have tons of bare metal. Just because startups focus on the new hotness doesn't mean the tried and true is out of fashion.

DISCO KING
Oct 30, 2012

STILL
TRYING
TOO
HARD
Plus these engines are all based on known quantities like python, yaml, ruby, etc. It's not like learning puppet is going to require a year course online, I was just going to touch up my Ruby, find some .pp file guides and get cracking on laptop VMs at the local coffee shop.

So does anybody have a useful "getting started" they want to pass around? Something that lays out best practices for directories, file formats, variables, what have you?
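
Not a tutorial, but for flavor, here's the package/file/service triad that most getting-started guides build everything on (names are illustrative; this would typically live in manifests/site.pp):

```puppet
# keep ntp installed, configured, and running
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
  notify  => Service['ntp'],   # config changes restart the daemon
}

service { 'ntp':
  ensure => running,
  enable => true,
}
```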

Warbird
May 23, 2012

America's Favorite Dumbass

Napoleon Bonaparty posted:

So there isn't a good way to learn puppet, this thing everybody's been asking if I know? Okay cool.

Puppet the company has some free learning VM or something like that. I’ve heard good things, but haven’t used them myself. LinuxAcademy has a course on it to prep for a cert. It’s comprehensive, but rough enough that I’d hold off on getting a subscription until they revamp it, unless there are other subjects on the platform that interest you.

My admittedly short experience in F500 DevOps has been about 80/20 Puppet to Ansible use. Couldn’t hurt to know either, but Puppet seems to be more popular, especially to corps just moving into formalized DevOps.


edit: Also chocolatey is the tits. Thanks and god bless.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Tigren posted:

I work at one of the Big 5 and we use Puppet and Ansible all over the place. We also have tons of bare metal. Just because startups focus on the new hotness doesn't mean the tried and true is out of fashion.
As part of my latest round of hiring, I spent a few weeks doing non-stop technical sourcing. Because I was hiring for remote positions, I reviewed thousands of posted resumes from different major cities, which included automation specialists from every big enterprise you could imagine. What I found surprised me: what you said is less true than I thought going in. Sure, okay, there's lots of bare metal and lots of legacy systems using old-school CM. But I also found that for companies like Walmart and Coca-Cola, that kind of application delivery is the exception, not the norm, for new development, and everything in greenfield has gone all in on container orchestration because the benefits in UAT and iterative feature delivery are so obvious and easily recognized in large enterprises with dozens of teams touching a project. These businesses have been itching for a decade for a technology like Kubernetes that they can run in-house with less complexity and more material gains than OpenStack, and many are running very far ahead of the curve.

Vulture Culture fucked around with this message at 19:18 on Dec 11, 2017

DISCO KING
Oct 30, 2012

STILL
TRYING
TOO
HARD
I just found that Puppet VM, thanks. It also looks like Kubernetes has a web browser tutorial.

I've already got a decent background in Ansible by dicking around with some people I know; it seems capable but it's obviously not done, which seems to be why everybody's hiring for Puppet right now. I'm 100% willing to believe the market will dump either of these in a heartbeat if something better comes along. I also know the real thing everybody's into right now is Docker/EC2/AWS and the handling mechanism is the least important part. Automation just seems cool and good and the direction everything's heading so I thought I'd check it all out.

Continuous Integration is more about things like container cluster management though, right? What are people using for those? Is that more like what Kubernetes is about?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Napoleon Bonaparty posted:

I just found that Puppet VM, thanks. It also looks like Kubernetes has a web browser tutorial.

I've already got a decent background in Ansible by dicking around with some people I know; it seems capable but it's obviously not done, which seems to be why everybody's hiring for Puppet right now. I'm 100% willing to believe the market will dump either of these in a heartbeat if something better comes along. I also know the real thing everybody's into right now is Docker/EC2/AWS and the handling mechanism is the least important part. Automation just seems cool and good and the direction everything's heading so I thought I'd check it all out.

Continuous Integration is more about things like container cluster management though, right? What are people using for those? Is that more like what Kubernetes is about?
With all due respect, you come across like you're way more focused on the technology buzzwords than the problems they actually solve. Someone approaching automation tooling without the domain knowledge to actually do those things is at best unproductive and at worst really dangerous. Take a breather from things that support the software engineering process and learn the software engineering process.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Half tongue in cheek, half serious. Legacy deployment gigs are pretty lovely in my experience, partly because most deployments and architectures with stateful systems are so half-assed in planning and roll-out that they're awful to take down for maintenance, not because there’s something wrong with a stateful system operationally (stateful systems without decent / tested / proven HA are something else). Nearly 9 years ago I was rolling out non-prod Hadoop clusters configured with early Puppet versions, which was ugly but sure beat hand-configuring everything; where’s the progress since then as a community, besides approaches that require stateless systems / architecture in half your systems? I'm a nobody with a mediocre employer history, and it’s disappointing that Silicon Valley companies are still using the same ops tooling as these awful laggards, despite how much better they’re supposed to be at software and systems. This isn’t the same as everyone using Git or Linux either.

Also, I literally have talked to engineers at Coca-Cola and many other non-tech companies locally, and only old, outdated stacks and applications that are not strategic are still doing classic Puppet and Chef and such. There’s a surprising number of companies that have already deployed Kubernetes into prod, and these are places I thought were far, far behind the curve. Even the middling-compensation companies have managed to move to stateless service components and everything. Puppet and Chef based gigs are few and far between now, pay in the middle, and are frequently below $90k (not very competitive, that is).

Mr. Crow
May 22, 2008

Snap City mayor for life

Vulture Culture posted:

With all due respect, you come across like you're way more focused on the technology buzzwords than the problems they actually solve. Someone approaching automation tooling without the domain knowledge to actually do those things is at best unproductive and at worst really dangerous. Take a breather from things that support the software engineering process and learn the software engineering process.

LMBO

I hope that was a joke post, bonaparty; otherwise see above. That sounds a lot like a recruiter who doesn't have any idea what they're talking about, trying to con some people into a job way below market rate.

Mr. Crow
May 22, 2008

Snap City mayor for life
Also y'all act like there isn't a huge market and industry of companies and services running private clouds and guess what you need to use to set that up.

I will agree that puppet is hot garbage but that doesn't mean that everyone is or should be chomping at the bit to throw all their services onto AWS or GCP, or that it's even intelligent to do so (see the several plain text DoD classified info leaks on AWS).

That's a solution looking for a problem.

FamDav
Mar 29, 2008

Mr. Crow posted:

(see the several plain text DoD classified info leaks on AWS).

i don't think the lack of intelligence on behalf of the contractor willfully making classified data public in an s3 bucket is a good example of whether or not aws is a good choice for hosting.

Mr. Crow
May 22, 2008

Snap City mayor for life
Ugh, users being idiots has been the driving force behind restrictive IT policy since forever, and is a definitive reason why companies wouldn't want to let their IP anywhere near public servers.

FamDav
Mar 29, 2008

Mr. Crow posted:

Ugh, users being idiots has been the driving force behind restrictive IT policy since forever, and is a definitive reason why companies wouldn't want to let their IP anywhere near public servers.

aws gives you practically all the tools necessary to have a restrictive IT policy if you want. for the above example, you can restrict the ability to create a publicly readable bucket organization-wide in about 10 lines of json.

there's plenty of valid reasons why you would prefer not to use public cloud, but the notion that it's less secure or difficult/impossible to implement all kinds of IT policies is suspect.
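
for what it's worth, the "about 10 lines of json" claim is plausible; here's a sketch of an Organizations service control policy that denies public canned ACLs account-wide (the statement name is made up; s3:x-amz-acl is the documented condition key for canned ACLs):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyPublicACLs",
    "Effect": "Deny",
    "Action": ["s3:PutBucketAcl", "s3:PutObjectAcl"],
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "s3:x-amz-acl": ["public-read", "public-read-write", "authenticated-read"]
      }
    }
  }]
}
```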

FamDav fucked around with this message at 04:41 on Dec 12, 2017

Sedro
Dec 31, 2008
I've used puppet for a few things and it's always been a disaster. I'll find a package which does exactly what I want, great! Hours later I'm debugging compatibility issues in a package 3 dependencies deep. Oh, there's a PR to fix it. Opened a year ago.


EssOEss
Oct 23, 2006
128-bit approved
Is there anything better? I find that most "system setup" tooling and automation is poo poo - I can certainly say so about Packer and Vagrant, which I have had the displeasure of using. Yet I know of nothing better.


On another note, I recently became aware that code signing now requires hardware key storage: https://support.globalsign.com/customer/portal/articles/2705869-minimum-requirements-for-code-signing

That completely destroys my cloud-based build processes. Is it really impossible to sign code in the cloud now?
