|
beuges posted:Thanks for both these suggestions! I think I'll install PRTG for now, and play around with Prometheus until I have worked out how to configure it properly, and then decide which to stick with. FWIW, I use InfluxDB+telegraf+grafana, which works the same way the other person described Prometheus, but I find InfluxDB queries easier to write.
|
# ? Nov 17, 2017 17:43 |
|
|
Turns out it's postgresql, not postgres. NihilCredo fucked around with this message at 01:08 on Nov 21, 2017 |
# ? Nov 20, 2017 15:47 |
|
Lol I hate that, wait till you have to find out it’s actually postgresql-9.4 on that particular installation
|
# ? Nov 21, 2017 17:41 |
I was hoping to manage my persistent EBS volumes with terraform with the aws_ebs_volume and aws_volume_attachment resources. In my head it was simple: give it a new AMI and terraform apply will take care of detaching the volume from the old instance, destroying the old instance, creating the new instance, and attaching the EBS volume. In practice, however, it doesn't seem to be that easy; I kept running into errors about being unable to detach the EBS volume from the instance. Looks like many others run into this as well: https://github.com/hashicorp/terraform/issues/2957 What's a good way of handling this use case? I get that terraform shouldn't be concerning itself with what is going on inside the VM, whether or not it's safe to detach the volume, etc, etc. But if terraform just cleanly shuts down the existing instance before detaching the volume it doesn't have to think about that other poo poo, right? What am I missing here? edit: Sounds like maybe I can use skip_destroy: https://serverfault.com/a/834180/280309 fletcher fucked around with this message at 00:53 on Nov 28, 2017 |
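For reference, the skip_destroy route from that serverfault answer goes on the attachment resource. Roughly like this (a sketch with made-up resource names, not a tested config):

```hcl
resource "aws_ebs_volume" "data" {
  availability_zone = "us-east-1a"
  size              = 100
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/xvdf"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.app.id

  # Don't attempt a detach during destroy; the attachment is just dropped
  # from state, and the instance teardown takes care of the rest.
  skip_destroy = true
}
```

The trade-off is that terraform no longer manages the detach at all, so a manual detach is needed if the volume ever has to move outside a replace.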
|
# ? Nov 27, 2017 23:24 |
|
I contribute to an opensource project on github. I'm not an official maintainer so I have my own fork that I submit PRs from. This project uses travis-ci to automatically build and run tests whenever a PR is submitted. Travis outputs a console log of the build/test job and it tells you which tests pass/fail, but the actual details about test failures are in a generated html report. The project maintainers have configured travis to upload this report to a server, but it doesn't work when a PR comes from a fork because it needs a secure key to upload and: travis docs posted:Encrypted environment variables are not available to pull requests from forks due to the security risk of exposing such information to unknown code. So is there any solution to this that doesn't necessitate each fork configuring their own travis-ci and upload server just to view a html report?
|
# ? Nov 30, 2017 22:50 |
|
You need to set up a version of the test-running task that outputs the results to stdout. Travis is designed to capture and store that output (in plain or ANSI text). Sending HTML files around is madness. You didn't specify what language or test runner you're using, but most of them have an option for plaintext output, and you can use the .travis.yml to specify precisely what commands to run.
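Something along these lines in .travis.yml, assuming a CMake/ctest project (the build commands here are generic, not taken from any particular repo):

```yaml
script:
  - mkdir build && cd build
  - cmake ..
  - make
  # --output-on-failure prints each failing test's full log to the console,
  # so the details land in the Travis log instead of an HTML report.
  - ctest --output-on-failure
```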
|
# ? Dec 1, 2017 14:50 |
|
aunt jemima posted:You need to set up a version of the test-running task that outputs the results to stdout. Travis is designed to capture and store that output (in plain or ANSI text). Sending HTML files around is madness. The issue is that in this case the test data is visual, so in order to debug effectively, it helps to be able to see the images. The html report is a single file, with images embedded in it as base64 data URIs (this can make it relatively large for an html file, on the order of 1MB or more depending on the number of image comparison test failures). Here is an actual example: http://files.openscad.org/tests/travis-2990_report.html Is it really such an unimaginable use-case to support any form of test report (output file) beyond a console log? peepsalot fucked around with this message at 22:20 on Dec 1, 2017 |
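For anyone wondering what goes into that kind of single-file report, the embedding trick is just base64 in a data URI. A rough sketch in plain Python (the function and test names are made up, not from the openscad tooling):

```python
import base64

def embed_image(png_bytes: bytes, label: str) -> str:
    """Return an <img> wrapped in a <figure>, with the PNG inlined as a base64 data URI."""
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return (f'<figure><img src="data:image/png;base64,{encoded}" alt="{label}">'
            f'<figcaption>{label}</figcaption></figure>')

def build_report(failures):
    """failures: dict mapping test names to the PNG bytes of their diff images."""
    body = "\n".join(embed_image(png, name) for name, png in failures.items())
    return ("<!DOCTYPE html><html><body><h1>Image comparison failures</h1>"
            f"{body}</body></html>")

# The PNG bytes would normally come from the test runner; this stand-in is
# just the PNG magic number padded out, enough to exercise the encoding step.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
report = build_report({"test_cube_render": fake_png})
```

Base64 inflates the payload by about 4/3, which is part of why those reports climb past 1MB once a few image diffs are embedded.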
# ? Dec 1, 2017 22:12 |
|
peepsalot posted:Language is C++. Test runner is ctest. I've not touched travis CI before, but can't you have another job happen after the PR test job completes that just takes files from the first job and uploads them then? That way the PR testing is completely separate from the upload step?
|
# ? Dec 1, 2017 23:56 |
|
poemdexter posted:I've not touched travis CI before, but can't you have another job happen after the PR test job completes that just takes files from the first job and uploads them then? That way the PR testing is completely separate from the upload step? edit: The key would be set in an environment variable, configurable via the travis web interface in the maintainer's travis settings. These env vars are normally fully accessible to the .travis.yml job, but if the PR is from a fork then these are all null'd out afaict. So if travis didn't null out those env vars, a hypothetical rude dude could write their travis.yml to echo the keys to twitter or upload some 0day warez to the server or whatever. It seems like there could/should be some way to have a protected job that is somehow not modifiable by forks to do such a thing, but not sure how that would be implemented. peepsalot fucked around with this message at 17:25 on Dec 2, 2017 |
# ? Dec 2, 2017 16:27 |
|
peepsalot posted:I'm only familiar with travis through this project, so I'm not 100% sure, but this "other job" would still need a secret key, and the problem appears to be that the jobs are defined in .travis.yml which is part of the github repo, so any fork has full control over travis by being able to modify this file in any commit. Yeah, I'm not sure how it would work with those constraints. I guess the best bet would be to just spit out the html as text in the test logs or whatever is accessible to end users and then manually copy/pasta it into an html file you can open in chrome. It's pretty much the same solution recommended by aunt jemima.
|
# ? Dec 2, 2017 23:56 |
|
Hi all. I assume this is the place for Docker questions. I've never used it before, so could people confirm / deny my impression of how it works / what it does? I'm skimming docs as I write this... Given the following docker compose file on a Windows 10 machine: code:
Does it check the local machine for installs of postgres / redis / whatever? Does it pull down docker-friendly versions of these programs from the web (eg, http://hub.docker.com/_/postgres/ or http://store.docker.com/images/postgres/) and then cache these images locally for future installs? Is there a way other than 'docker pull X' to make images locally available? My dev machine is a desktop with limited bandwidth, so it'd be handy to be able to download an image with a laptop at the library and then install it at home. I see that the ./ directories are pointing to directories in the repo, but what about the ~/.m2 and ~/.lein directories? Some linuxy thing I shouldn't worry about?
|
# ? Dec 3, 2017 18:36 |
|
Newf posted:Hi all. I assume this is the place for Docker questions. I've never used it before, so could people confirm / deny my impression of how it works / what it does? I'm skimming docs as I write this... When you run docker-compose, nothing happens by default. What it sounds like you want to do is run docker-compose build. Each of those "services" points to either a docker image or a dockerfile. A docker image is a prebuilt container someone built and published to a docker registry for others to use. In this case, the docker compose file has one image referenced: redis. The rest of the services point to dockerfiles. Dockerfiles start from another image as a baseline, then layer on commands or files or whatever to customize it. If you were to run docker-compose pull, it would only download redis. The rest of the services have to be built from their dockerfiles first, the first step of which is downloading the necessary base images. Once an image is downloaded, it's cached for reuse. So basically do a quick build and you should be good to go.
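A minimal hypothetical compose file illustrating that image-vs-build distinction (service names invented, not the file from the question):

```yaml
version: "3"
services:
  redis:
    image: redis        # a published image: fetched by `docker-compose pull`
  app:
    build: ./app        # built locally from ./app/Dockerfile by `docker-compose build`
    volumes:
      - ~/.m2:/root/.m2 # a host directory (Maven cache) mounted into the container
```

The ~/.m2 and ~/.lein entries in a volumes list like that are just host-side caches being shared into the container, so builds don't re-download dependencies every run.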
|
# ? Dec 3, 2017 19:12 |
|
Hey everybody, I think I'm done loving around with ansible/lxd and want to learn puppet/docker for some decently marketable skills. Anybody got a good video tutorial or site? I've been working my way through this: https://www.example42.com/tutorials/PuppetTutorial/ It seems pretty good so far, but is there anything with more directory infrastructure examples or command line execution demonstrations? Thanks.
|
# ? Dec 10, 2017 22:54 |
|
Napoleon Bonaparty posted:Hey everybody, I think I'm done loving around with ansible/lxd and want to learn puppet/docker for some decently marketable skills. Anybody got a good video tutorial or site? I've been working my way through this: https://www.example42.com/tutorials/PuppetTutorial/ It seems pretty good so far, but is there anything with more directory infrastructure examples or command line execution demonstrations? Thanks. If you're looking for marketable skills don't learn Puppet.
|
# ? Dec 11, 2017 06:19 |
|
SeaborneClink posted:If you're looking for marketable skills don't learn Puppet. Why is that? And what would you suggest instead?
|
# ? Dec 11, 2017 06:24 |
|
There's still a big market for Puppet, but CM work is basically for legacy deployments nowadays, and you're guaranteed to be beaten out by H-1B candidates at 40 cents on the dollar.
|
# ? Dec 11, 2017 06:41 |
|
Look for companies that are deploying on Nomad, Rancher, ECS, or preferably Kubernetes. Otherwise, I hope you've got a strong specialty in something besides ops, like machine learning or Big Data BS. Even if the place is running on bandwagon technologies and it goes flop, you'll have a skill set that good companies want, too. Companies that haven't been able to get their services deployed into containers by now are likely going to face some serious friction trying to iterate quickly and safely, and are at risk of dying from lagging their market. If a place needs to do some stateful deployments, I hope you're working on stuff like Consul, Kafka, Etcd, Dynomite, or some larger databases where you can't just do A/B deploys in 2 minutes on a whim. Heck, even Elasticsearch is becoming old school at this point.
|
# ? Dec 11, 2017 07:38 |
|
necrobobsledder posted:Look for companies that are deploying on Nomad, Rancher, ECS, or preferably Kubernetes. Otherwise, I hope you've got a strong specialty in something besides ops, like machine learning or Big Data BS. Even if the place is running on bandwagon technologies and it goes flop, you'll have a skill set that good companies want, too. Companies that haven't been able to get their services deployed into containers by now are likely going to face some serious friction trying to iterate quickly and safely, and are at risk of dying from lagging their market. This is a joke post, right? Or wait, no, you're 16 years old?
|
# ? Dec 11, 2017 08:00 |
|
Eggnogium posted:This is a joke post, right? Or wait, no, you're 16 years old? If it was a joke, it was a perfect representation of a comment at Hacker News. Though it needed more references to San Francisco, not seeing why anyone would want a car, and Soylent.
|
# ? Dec 11, 2017 13:22 |
|
Meh, even if it's satirical, it is some of what I think about when looking at new jobs. Places that are open to new tech most likely aren't going to be scared to try new tech, which is interesting, and they tend to pay much better than other places.
|
# ? Dec 11, 2017 14:38 |
|
Pay at startups is directly proportional to what they can raise from investors, VCs, and stakeholders, so buzzword bingo and hype plays a massive role. I don’t know much about pay at established companies and large corporations, but it tends to be much more tempered in reality.
|
# ? Dec 11, 2017 15:22 |
|
Pollyanna posted:Pay at startups is directly proportional to what they can raise from investors, VCs, and stakeholders, so buzzword bingo and hype plays a massive role. I don't know much about pay at established companies and large corporations, but it tends to be much more tempered in reality. I make 50% more at an F500 than at my last startup. And make 100% more than what we pay our outsourced internal sysadmins. They want devops too and will pay for it.
|
# ? Dec 11, 2017 16:04 |
|
Pollyanna posted:Pay at startups is directly proportional to what they can raise from investors, VCs, and stakeholders, so buzzword bingo and hype plays a massive role. I don’t know much about pay at established companies and large corporations, but it tends to be much more tempered in reality. this is the exact opposite of reality unless you include equity which may not be worth anything as pay at startups
|
# ? Dec 11, 2017 16:10 |
|
So there isn't a good way to learn puppet, this thing everybody's been asking if I know? Okay cool. EDIT: necrobobsledder posted:Look for companies that are deploying on Nomad, Rancher, ECS, or preferably Kubernetes. Otherwise, I hope you’ve got a strong specialty in something besides ops like machine learning or Big Data BS. Even if the place is running on bandwagon technologies and it goes flop, you’ll have something that good companies want that skill set, too. Companies that haven’t been able to get their services deployed into containers by now are likely going to face some serious friction trying to quickly iterate safely and are at risk of dying from lagging their market. I'll definitely be checking these out, but legacy deployment is where everybody's at right now in silicon valley. Everybody's getting in on the whole thing and puppet's the main game as far as I can tell because it's a finished product and not for a better reason. And by everybody I mean it looks like everybody from gerber baby products to ski manufacturers are trying to get in on puppet. DISCO KING fucked around with this message at 17:29 on Dec 11, 2017 |
# ? Dec 11, 2017 17:09 |
|
I bet they’re asking you if you know Puppet because some previous engineer wrote something using it and now they’re hamstrung by this system nobody else at the company knows.
|
# ? Dec 11, 2017 17:16 |
|
Puppet is fine; many orgs are still early in CM lifecycle, and it's got what enterprises crave. Not every industry or every job is at the front of the latest container orchestration flavor-of-the month. It's like the Java of CM; that some H1bs know it doesn't mean it's not worth knowing.
|
# ? Dec 11, 2017 17:41 |
|
I work at one of the Big 5 and we use Puppet and Ansible all over the place. We also have tons of bare metal. Just because startups focus on the new hotness doesn't mean the tried and true is out of fashion.
|
# ? Dec 11, 2017 17:45 |
|
Plus these engines are all based on known quantities like python, yaml, ruby, etc. It's not like learning puppet is going to require a year course online, I was just going to touch up my Ruby, find some .pp file guides and get cracking on laptop VMs at the local coffee shop. So does anybody have a useful "getting started" they want to pass around? Something that lays out best practices for directories, file formats, variables, what have you?
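For a taste of the .pp format before diving into those guides, here's a minimal made-up manifest (the node name and package are placeholders, not anybody's real config):

```puppet
# site.pp - declare the desired state; puppet works out the steps
node 'web01.example.com' {
  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],  # explicit ordering between resources
  }
}
```

The declarative style is the main adjustment coming from Ansible: you describe end state and resource dependencies rather than an ordered task list.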
|
# ? Dec 11, 2017 18:30 |
|
Napoleon Bonaparty posted:So there isn't a good way to learn puppet, this thing everybody's been asking if I know? Okay cool. Puppet the company has some free learning VM or something like that. I've heard good things, but haven't used them myself. LinuxAcademy has a course on it to prep for a cert. It's comprehensive, but rough enough that I'd hold off on a subscription until they revamp it, unless there are other subjects on the platform that interest you. My admittedly short experience in F500 DevOps has been about 80/20 Puppet to Ansible use. Couldn't hurt to know either, but Puppet seems to be more popular, especially with corps just moving into formalized DevOps. edit: Also chocolatey is the tits. Thanks and god bless.
|
# ? Dec 11, 2017 18:59 |
|
Tigren posted:I work at one of the Big 5 and we use Puppet and Ansible all over the place. We also have tons of bare metal. Just because startups focus on the new hotness doesn't mean the tried and true is out of fashion. Vulture Culture fucked around with this message at 19:18 on Dec 11, 2017 |
# ? Dec 11, 2017 19:16 |
|
I just found that Puppet VM, thanks. It also looks like Kubernetes has a web browser tutorial. I've already got a decent background in Ansible by dicking around with some people I know; it seems capable but it's obviously not done, which seems to be why everybody's hiring for Puppet right now. I'm 100% willing to believe the market will dump either of these in a heartbeat if something better comes along. I also know the real thing everybody's into right now is Docker/EC2/AWS and the handling mechanism is the least important part. Automation just seems cool and good and the direction everything's heading so I thought I'd check it all out. Continuous Integration is more about things like container cluster management though, right? What are people using for those? Is that more like what Kubernetes is about?
|
# ? Dec 11, 2017 20:25 |
|
Napoleon Bonaparty posted:I just found that Puppet VM, thanks. It also looks like Kubernetes has a web browser tutorial. With all due respect, you come across like you're way more focused on the technology buzzwords than the problems they actually solve. Someone approaching automation tooling without the domain knowledge to actually do those things is at best unproductive and at worst really dangerous. Take a breather from things that support the software engineering process and learn the software engineering process.
|
# ? Dec 11, 2017 21:17 |
|
Half tongue in cheek, half serious. Legacy deployment gigs are pretty lovely in my experience, partly because most deployments and architectures with stateful systems are so half-assed in planning and roll-out that they're awful to take down for maintenance if deployed lazily, not because there's something wrong with a stateful system operationally (stateful systems without decent / tested / proven HA are something else). Nearly 9 years ago I was rolling out non-prod Hadoop clusters configured with early Puppet versions, which was ugly but sure beat hand-configuring everything - where's the progress since then as a community, besides approaches that require stateless systems / architecture in half your systems? It's disappointing that I'm a nobody with a mediocre employer history and somehow Silicon Valley companies are still using the same ops tooling as these awful laggards, despite how much better they're supposed to be at software and systems. This isn't the same as everyone using Git or Linux either. Also, I have literally talked to engineers at Coca Cola and many other non-tech companies locally, and only the old, outdated stacks and applications that are not strategic are still doing classic Puppet and Chef and such. There's a surprising number of companies that have already deployed Kubernetes into prod, and these are places I thought were far, far behind the curve. Even the middling-compensation companies have managed to move to stateless service components and everything. Puppet and Chef based gigs are few and far between now, pay in the middle, and are frequently below $90k (not very competitive, that is).
|
# ? Dec 11, 2017 23:36 |
|
Vulture Culture posted:With all due respect, you come across like you're way more focused on the technology buzzwords than the problems they actually solve. Someone approaching automation tooling without the domain knowledge to actually do those things is at best unproductive and at worst really dangerous. Take a breather from things that support the software engineering process and learn the software engineering process. LMBO I hope that was a joke post, bonaparty, otherwise see above. That sounds a lot like a recruiter who doesn't have any idea what they're talking about, trying to con people into a job way below market rate.
|
# ? Dec 12, 2017 01:27 |
|
Also y'all act like there isn't a huge market and industry of companies and services running private clouds, and guess what you need to use to set those up. I will agree that puppet is hot garbage, but that doesn't mean that everyone is or should be chomping at the bit to throw all their services onto AWS or GCP, or that it's even intelligent to do so (see the several plain text DoD classified info leaks on AWS). That's a solution looking for a problem.
|
# ? Dec 12, 2017 01:36 |
|
Mr. Crow posted:(see the several plain text DoD classified info leaks on AWS). i don't think the lack of intelligence on the part of the contractor willfully making classified data public in an s3 bucket is a good example of whether or not aws is a good choice for hosting.
|
# ? Dec 12, 2017 02:05 |
|
Ugh, users being idiots has been the driving force behind restrictive IT policy since forever, and is a definitive reason why companies wouldn't want to let their IP anywhere near public servers.
|
# ? Dec 12, 2017 03:06 |
|
Mr. Crow posted:Ugh, users being idiots has been the driving force behind restrictive IT policy since forever, and is a definitive reason why companies wouldn't want to let their IP anywhere near public servers. aws gives you practically all the tools necessary to have a restrictive IT policy if you want. for the above example, you can restrict the ability to create a publicly readable bucket organization-wide in about 10 lines of json. there are plenty of valid reasons why you would prefer not to use public cloud, but the notion that it's less secure or difficult/impossible to implement all kinds of IT policies is suspect. FamDav fucked around with this message at 04:41 on Dec 12, 2017 |
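for reference, the ~10 lines presumably look something like this deny-on-public-ACL statement (the s3 condition key is real, but treat the whole policy as an unaudited sketch, not a drop-in):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyPublicAcls",
    "Effect": "Deny",
    "Action": ["s3:PutBucketAcl", "s3:PutObjectAcl"],
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "s3:x-amz-acl": ["public-read", "public-read-write", "authenticated-read"]
      }
    }
  }]
}
```

attached as a service control policy at the org level, an explicit deny like that wins over any allow a member account grants itself.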
# ? Dec 12, 2017 04:39 |
|
I've used puppet for a few things and it's always been a disaster. I'll find a package which does exactly what I want, great! Hours later I'm debugging compatibility issues in a package 3 dependencies deep. Oh, there's a PR to fix it. Opened a year ago.
|
# ? Dec 12, 2017 05:45 |
|
|
|
Is there anything better? I find that most "system setup" tooling and automation is poo poo - I can certainly say so about Packer and Vagrant, which I have had the displeasure of using. Yet I know of nothing better. On another note, I recently became aware that code signing now requires hardware key storage: https://support.globalsign.com/customer/portal/articles/2705869-minimum-requirements-for-code-signing That completely destroys my cloud-based build processes. Is it really impossible to sign code in the cloud now?
|
# ? Dec 12, 2017 06:53 |