|
Dreadrush posted:Hi I'm very new to the whole Docker thing and am trying to learn more about it. If you're not deploying node, it shouldn't be included in your images. Generate your static files outside docker and build them into the image.
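A minimal sketch of that approach (paths are hypothetical; the static build, e.g. `npm run build` producing `./dist`, happens on the host, and only its output lands in the image):

```dockerfile
# serve prebuilt static files; no node toolchain in the image
FROM nginx:alpine
COPY dist/ /usr/share/nginx/html/
```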
|
# ? Nov 20, 2016 07:47 |
|
Alternatively, use volumes and mount the content into the proxy server container.
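A compose-file sketch of the volume approach (image and paths are hypothetical):

```yaml
# mount host content read-only into the proxy container
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro
```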
|
# ? Nov 20, 2016 19:00 |
|
Dreadrush posted:Hi I'm very new to the whole Docker thing and am trying to learn more about it. Don't deploy nginx for static files unless you really can't help it. You'll find it requires more work to maintain than you want, and you'll have to do some work to scale it for traffic even if it's as simple as putting containers behind Mesos or Kubernetes. Instead, use something like S3 or CloudFront (or w/e your cloud provider has) for static stuff and set CORS accordingly.
|
# ? Nov 20, 2016 19:05 |
|
Thanks for your advice. I read a blog post saying that Docker can be used for compiling your application too and not just solely concentrating on what is deployed, but I guess this is not the right way to do it.
|
# ? Nov 20, 2016 21:20 |
|
Dreadrush posted:Thanks for your advice. I read a blog post saying that Docker can be used for compiling your application too and not just solely concentrating on what is deployed, but I guess this is not the right way to do it. I used that as a half joke at work this week. Someone was complaining that the hardest problem with open source was building from source, with all the assumptions that aren't documented. And I pitched docker as the one true configure/autoconf.
|
# ? Nov 20, 2016 21:42 |
|
docker is fine for compiling, but you should have your container produce whatever artifact you need and then use that artifact in separate containers (or just run it directly). you shouldn't try to compose your build container with your run container. that's the worst of all worlds
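A rough sketch of that build/run split (all names are hypothetical; `Dockerfile.build` holds the toolchain, `Dockerfile.run` only copies in the artifact):

```shell
# compile inside a throwaway build container
docker build -t myapp-build -f Dockerfile.build .
docker run --name myapp-build-run myapp-build     # compiles; writes /out/myapp in the container
docker cp myapp-build-run:/out/myapp ./myapp      # extract the artifact to the host
docker rm myapp-build-run

# bake only the artifact into a slim runtime image
docker build -t myapp -f Dockerfile.run .
```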
|
# ? Nov 20, 2016 22:06 |
|
I've got an application that's developed locally under docker-compose, where the application container is kept separate from the database container. I want to try using Bitbucket Pipelines as an automated regression testing system, whereby branches automatically have their specs run. However, I don't see anything out there on integrating an application that uses docker-compose with Pipelines, and I don't know enough about Docker to figure out what I need to do to get it to work, aside from the fact that having an application confined to a single container is apparently not what Pipelines expects. Anyone here familiar enough with Docker, Bitbucket, and Pipelines to help me figure out what I need to do?
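Pipelines doesn't run your docker-compose file; each step runs in a single build container, with extra containers attached as services. Something along these lines might work (the image names and the service syntax here are assumptions, so check the current Pipelines reference):

```yaml
# hypothetical bitbucket-pipelines.yml: the app runs in the step image,
# the database runs as an attached service instead of a compose container
image: ruby:2.3
pipelines:
  default:
    - step:
        script:
          - bundle install
          - bundle exec rspec
        services:
          - database
definitions:
  services:
    database:
      image: postgres:9.5
```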
|
# ? Nov 21, 2016 16:18 |
|
Jenkins has a new blog post up for GC tuning on large instances. I'm going to put it on our test server and throw some load at it: GC tuning blog post. Anyone here going to re:invent? I'm looking forward to doing GameDay this year.
|
# ? Nov 23, 2016 16:07 |
|
Oh hey that owns. Thanks for sharing.
|
# ? Nov 23, 2016 16:20 |
|
I'm struggling to automate creating VMs on an ESXi, any help would be appreciated. I'm currently using packer (and new to it) to connect to the server and create a VM. I'm then trying to create a .box from it for vagrant and this is where it's failing: how do I tell vagrant to look on the server for the image and/or to export it back to my local machine? Will this custom .box even work and allow vagrant to up into the ESXi server (hoping it somehow baked the credentials into the box, but I'm pretty sure this is not going to work in general)? The biggest problem seems to be not having the ESXi server hooked into vCenter, since all of the plugins for vagrant/Ansible that work with ESXi expect to be using vCenter, but getting that done is out of my hands (though that's something I'm working on).
|
# ? Nov 30, 2016 22:18 |
|
Mr. Crow posted:I'm struggling to automate creating VMs on an ESXi, any help would be appreciated. If you use the Vagrant post-processor in your packer template it should automatically pull it down and do the work I'd think? Can you share your template? theperminator fucked around with this message at 12:25 on Dec 12, 2016 |
# ? Dec 12, 2016 11:29 |
|
theperminator posted:If you use the Vagrant post-processor in your packer template it should automatically pull it down and do the work I'd think? Can you share your template? Ended up getting vCenter installed so it's a non-issue. I'm also 90% sure what I had in my head wouldn't have worked anyway, not without writing a custom plugin. I can post what I had for posterity if anyone is curious but I wouldn't recommend that approach.
|
# ? Dec 12, 2016 19:24 |
|
I have been tracing mysterious 3 second delays in my newly containerized software stack for a few weeks. Bad timeout management? No, nothing showed up. HTTP server queuing issues? Seems fine. Too much concurrency where not justified? No, all synchronized activities were under low pressure. And it is such a regular 3 seconds. No drift at all! Though sometimes a multiple of 3 seconds. At last, I did what I should have done at the start and took a packet capture. Well what do you know... Windows containers randomly fail to initiate TCP connections. The 3 seconds? That's the automatic retry interval. Mind-boggling how such a failure can happen. So far, it has reproduced on every server I have tried. Anyone seen this failure before?
|
# ? Feb 14, 2017 09:16 |
|
What makes me wonder in a situation like this is how a code path for such an error even comes to exist. I mean, networking is such a core OS feature; that stuff should be bulletproof at this point.
|
# ? Feb 17, 2017 13:04 |
|
By any chance do any of you know how to get TeamCity to only trigger on changes to master from Gerrit? From what I understand, the build triggers are looking at the moon language paths that Gerrit creates, and it cannot disambiguate them. I am wondering if there is something I can do with the VCS root instead. We are pushing to master in Gerrit using HEAD:refs/for/master if that helps. I tried to ask in the JetBrains IRC but all that happened in the 30 minutes afterwards is somebody coming on and explaining how to convert to Islam. Also, all men should have a one-fist beard.
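One thing worth trying is a branch specification on the VCS root, so TeamCity only tracks real master and ignores Gerrit's review refs (a sketch; the exact ref patterns depend on your Gerrit setup):

```
+:refs/heads/master
-:refs/for/*
-:refs/changes/*
```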
|
# ? Mar 16, 2017 20:39 |
|
Does anyone here work somewhere that forces all commits to master to go through a pull request (which has to build green before merging)? Is it good or does it just add more annoying process? Currently we just do something like "mvn install && git push" which runs all our integration tests before pushing which is pretty good at keeping the build green. But it does require discipline.
|
# ? Mar 31, 2017 20:27 |
|
smackfu posted:Does anyone here work somewhere that forces all commits to master to go through a pull request (which has to build green before merging)? Is it good or does it just add more annoying process? Currently we just do something like "mvn install && git push" which runs all our integration tests before pushing which is pretty good at keeping the build green. But it does require discipline. Where I am, we have a system that watches for branches named a certain way: it merges master into the branch, runs the tests, then pushes the result to master. This takes care of the problem you'll see while scaling: there could be 3 people doing that mvn install at the same time, and at best 2 of them will need to rerun their tests, at worst they won't.
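The watcher job described above boils down to something like this (a sketch, assuming a Maven project and a $BRANCH variable supplied by the CI system):

```shell
# bring the branch up to date with master, test the merged result,
# and only advance master if the suite passes
git fetch origin
git checkout "$BRANCH"
git merge --no-edit origin/master
mvn install && git push origin HEAD:refs/heads/master
```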
|
# ? Mar 31, 2017 20:34 |
|
smackfu posted:Does anyone here work somewhere that forces all commits to master to go through a pull request (which has to build green before merging)? Is it good or does it just add more annoying process? Currently we just do something like "mvn install && git push" which runs all our integration tests before pushing which is pretty good at keeping the build green. But it does require discipline. Integration tests as a gate for merge are bad on big codebases, though. They take a long time. You should have enough unit test coverage to handle most of the clear and obvious build-breaking bugs, and run your integration tests overnight. (This guidance varies if you happen to be doing continuous delivery.)
|
# ? Mar 31, 2017 20:49 |
|
Hughlander posted:Where I am we have a system that watches for branches a certain way. Merges master into it runs tests then pushes it to master. This takes care of the problem you'll see while scaling. There could be 3 people doing that mvn install at the same time. And 2 of them will need to rerun their tests at best or won't at worst. Aye that sounds good. Since our full test suite takes 15 minutes to run, even with our pretty small team we run into that conflict issue and it does waste time. It's kind of a bummer... the BitBucket feature that I am piloting does the build *pre-merge* which is kind of annoying because it still means the merged code can break the master build. Your system seems better.
|
# ? Mar 31, 2017 20:53 |
|
smackfu posted:Aye that sounds good. Since our full test suite takes 15 minutes to run, even with our pretty small team we run into that conflict issue and it does waste time. What I've seen at some companies is a gated merge, which means everything has to build and pass all the unit tests before it gets merged into the master branch. Used in conjunction with: https://trunkbaseddevelopment.com/ I don't know the pros and cons of this, but I've seen it used successfully.
|
# ? Mar 31, 2017 21:04 |
|
I think it's a good practice, I implemented it at my current job and everyone likes it. The important thing is that everything you put in the check-in gate is fast and reliable. At a previous job we had two-hour check-ins that ran the whole automation suite, they failed 10% of the time on intermittent issues, and the only thing to do was restart from the beginning.
|
# ? Mar 31, 2017 21:52 |
|
Our repos that non-developers touch regularly (mostly website stuff) are set up to only allow pushing to master by merging a PR that has passed tests, mostly because it helps catch mistakes from users who don't know how to use version control and shouldn't need to learn anything more than the absolute basics. With a large number of developers working on a repo you really want a merge queue, but I've always seen gated merges without a merge queue as a lovely attempt at solving a social problem with technology.
|
# ? Mar 31, 2017 21:55 |
|
do people complaining about waiting on integration tests not do code review? we don't merge anything in less than 12 hours (unless it's a critical fix) because all prs have to go through extensive code review. that always takes longer than running integration tests
|
# ? Apr 1, 2017 03:46 |
|
the talent deficit posted:do people complaining about waiting on integration tests not do code review? we don't merge anything in less than 12 hours (unless it's a critical fix) because all prs have to go through extensive code review. that always takes longer than running integration tests
|
# ? Apr 1, 2017 04:04 |
|
I just tend to get really upset at integration tests with spurious failures that keep blocking my build until I go to complain to the nearest manager and in the process of spelling out the issue figure out that it's not spurious after all. gently caress integration tests.
|
# ? Apr 1, 2017 09:43 |
|
Vulture Culture posted:Unsolicited opinion: if a code review takes 12 hours, your change batches are probably too big. Most of the code reviews I submit can be completed in a minute or two (this is obviously not true for enormous refactors) I think he's probably saying people don't jump on the PRs as soon as they're assigned, which seems normal.
|
# ? Apr 1, 2017 15:54 |
|
Vulture Culture posted:Unsolicited opinion: if a code review takes 12 hours, your change batches are probably too big. Most of the code reviews I submit can be completed in a minute or two (this is obviously not true for enormous refactors) we encourage all developers to at least read the commit log for each pr and raise questions/objections if they have them, even if they are not an assigned reviewer. we tend to leave them open until the next morning just to give people an option to review
|
# ? Apr 1, 2017 21:37 |
|
Virigoth posted:Jenkins has a new blog post up for GC tuning on large instances. I"m going to put it on our test server and throw some load at it.
|
# ? Apr 2, 2017 23:00 |
|
At the minimum you need to rerun the tests after making changes requested during review, and the final review pass shouldn't take a significant amount of time.
|
# ? Apr 3, 2017 01:50 |
|
What do people use to store artifacts between build steps? My specific scenario is that I'm in Jenkins and I want there to be two build steps in my pipeline.
Step 1: if the relevant part of the source tree changed, create a vagrant box using packer, then publish it somewhere.
Step 2: get the latest box from wherever it is published and do stuff.
My problem is I don't know where to put the box in between the steps. Stuff I've looked into:
Is there any way to get maven to work? My googling seemed to tell me that people use maven for non-java artifacts all the time and this would be no problem. Failing that, is there anything like archiva that has an interface for putting a file w/ a version and then also an interface for getting that file back?
|
# ? May 4, 2017 23:41 |
|
We store things as jenkins artifacts and use the copy artifact plugin and it's awful and a common source of spurious build failures.
|
# ? May 4, 2017 23:50 |
|
Plorkyeran posted:We store things as jenkins artifacts and use the copy artifact plugin and it's awful and a common source of spurious build failures. Based on examples I saw I assumed that this was the case, it's why I am hunting for a different solution.
|
# ? May 5, 2017 00:02 |
|
I got maven to behave. If you set up an archiva then here's what you need to do to deploy whatever into it. You need maven 3.1+ because of this bug. This assumes your archiva is running on archiva.myinternaldomain.com:8080, that you set up a user with credentials jenkins/password, and that maven is installed. In this example I'm uploading my packer box.
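With the setup described, the deploy step would be a `deploy:deploy-file` invocation along these lines (the coordinates, file name, and repository path are made up; `repositoryId` must match a `<server>` entry in `~/.m2/settings.xml` carrying the jenkins/password credentials):

```shell
# upload an arbitrary file to archiva without needing a pom.xml
mvn deploy:deploy-file \
  -Dfile=output/mybox.box \
  -DgroupId=com.example.boxes \
  -DartifactId=mybox \
  -Dversion=1.0.0 \
  -Dpackaging=box \
  -DrepositoryId=archiva.internal \
  -Durl=http://archiva.myinternaldomain.com:8080/repository/internal/
```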
I haven't integrated this process with jenkins yet but I feel fairly good about it. Dren fucked around with this message at 17:40 on May 5, 2017 |
# ? May 5, 2017 17:38 |
|
A cheap hack I used to get around this problem was the "Use custom workspace" option and just set the second job to use the first's workspace. Looks like you solved it in a much cleaner way.
|
# ? May 5, 2017 18:41 |
|
Our TeamCity agents are now described in Packer! No more unicorn build agents. On the downside, it takes over 20 minutes for a Windows AMI to boot in AWS if it's been sysprepped (2 reboots!), even with provisioned IOPS.
|
# ? May 11, 2017 12:23 |
|
Welp, got a job as a DevOps Engineer despite all of my prior work experience being strictly in development, with only side / personal work having anything to do with actual operations/IT. Do any of you have some good recommendations on general material I can read up on to prepare myself for the gig? Specifically general administration stuff regarding DevOps with a focus on CI/CD, Kubernetes, Docker, Terraform, etc. I'd like to start getting together a personal library. This should be fun. Especially the ~24hr drive and move I get to do this month!
|
# ? May 11, 2017 18:00 |
|
This book is excellent. The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices https://www.amazon.com/dp/B01BJ4V66M/ref=cm_sw_r_cp_apa_sQjfzbWBC57FV The space is moving very fast but it's pretty up-to-date.
|
# ? May 11, 2017 18:09 |
|
Right on! I also picked up The Phoenix Project to listen to for the drive up, as I've heard that's a pretty casual read without being heavily technical. Though I have a feeling The DevOps 2.0 Toolkit might be a bit difficult to do as an audiobook.
|
# ? May 11, 2017 18:15 |
|
Ya, it's meant to be hands-on; it's less a book and more of an interactive guide.
|
# ? May 19, 2017 16:05 |
|
|
Cancelbot posted:Our TeamCity agents are now described in Packer! No more unicorn build agents that sounds excruciating. why are you sysprepping your agents?
|
# ? May 21, 2017 05:07 |