Sedro
Dec 31, 2008

Dreadrush posted:

Hi I'm very new to the whole Docker thing and am trying to learn more about it.

I want to be able to deploy an nginx server that will host static files for my website. The static files are compiled by running webpack.

Currently, I have two containers:
web: uses FROM node:latest to take my source files, and builds the dist static files
nginx: uses FROM nginx to run nginx

I have a docker-compose file setup to run the two dockerfiles.

How can "copy" the files generated in the web container to the nginx container? The web container doesn't even have any server running - it has only created static files.
Should I only have one container (nginx) and be generating the static files all inside that?

I tried hosting an express server in the web container and using a volume to share the generated files, which worked fine; however, on repeated deployments it seemed I had to do extra work to delete all the volumes first - it felt like I was doing this wrong. Also, in the end all I need is the nginx server with static files - no node server running at any time.

If you're not deploying node, it shouldn't be included in your images.

Generate your static files outside docker and build them into the image.

code:
FROM nginx
# the stock nginx image serves /usr/share/nginx/html by default
COPY dist/ /usr/share/nginx/html/
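If your Docker is new enough for multi-stage builds (17.05+), you can keep the webpack compile inside Docker and still ship a node-free image. A minimal sketch, assuming your package.json has a build script that runs webpack into dist/:

code:
# build stage: the node toolchain exists only here
FROM node:latest AS build
WORKDIR /app
COPY . .
RUN npm install && npm run build

# final stage: only nginx plus the compiled assets ship
FROM nginx
COPY --from=build /app/dist/ /usr/share/nginx/html/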

Mr. Crow
May 22, 2008

Snap City mayor for life
Alternatively, use volumes and mount the content into the proxy server container.

code:
# mount the host's /content over nginx's default docroot
docker run -v /content:/usr/share/nginx/html nginx
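Since the original setup is docker-compose, the same idea as a compose service might look like this (a sketch; the host path and port mapping are assumptions):

code:
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro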

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Dreadrush posted:

Hi I'm very new to the whole Docker thing and am trying to learn more about it.

I want to be able to deploy an nginx server that will host static files for my website. The static files are compiled by running webpack.

Currently, I have two containers:
web: uses FROM node:latest to take my source files, and builds the dist static files
nginx: uses FROM nginx to run nginx

I have a docker-compose file setup to run the two dockerfiles.

How can "copy" the files generated in the web container to the nginx container? The web container doesn't even have any server running - it has only created static files.
Should I only have one container (nginx) and be generating the static files all inside that?

I tried hosting an express server in the web container and using a volume to share the generated files, which worked fine; however, on repeated deployments it seemed I had to do extra work to delete all the volumes first - it felt like I was doing this wrong. Also, in the end all I need is the nginx server with static files - no node server running at any time.

Don't deploy nginx for static files if you can help it. You'll find it requires more maintenance work than you want, and you'll have to do some work to scale it for traffic, even if that's as simple as putting containers behind Mesos or Kubernetes.

Instead, use something like S3 or CloudFront (or w/e your cloud provider has) for static stuff and set CORS accordingly.
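For the S3 route, a hedged sketch with the aws CLI (the bucket name is hypothetical):

code:
# create a bucket, enable static website hosting, and sync the webpack output
aws s3 mb s3://my-site-static
aws s3 website s3://my-site-static --index-document index.html
aws s3 sync dist/ s3://my-site-static --acl public-read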

Dreadrush
Dec 29, 2008
Thanks for your advice. I read a blog post saying that Docker can be used for compiling your application too, not just for packaging what is deployed, but I guess this is not the right way to do it.

Hughlander
May 11, 2005

Dreadrush posted:

Thanks for your advice. I read a blog post saying that Docker can be used for compiling your application too, not just for packaging what is deployed, but I guess this is not the right way to do it.

I used that as a half-joke at work this week. Someone was complaining that the hardest problem with open source was building from source with all the assumptions that aren't documented, and I pitched docker as the one true configure/autoconf.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

docker is fine for compiling, but you should have your container produce whatever artifact you need and then use that artifact in separate containers (or just run it directly). you shouldn't try to compose your build container with your run container. that's the worst of all worlds
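A minimal sketch of that split, assuming the node/nginx setup from earlier in the thread (paths are assumptions):

code:
# build container emits the artifact onto the host, then exits
docker run --rm -v "$PWD":/app -w /app node:latest sh -c 'npm install && npm run build'

# run image just packages the artifact (Dockerfile as shown above)
docker build -t my-site .
docker run -d -p 80:80 my-site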

Pollyanna
Mar 5, 2005

Milk's on them.


I've got an application that's developed locally under docker-compose, where the application container is kept separate from the database container. I want to try using Bitbucket Pipelines as an automated regression testing system, whereby branches automatically have their specs run. However, I don't see anything out there on integrating an application that uses docker-compose with Pipelines, and I don't know enough about Docker to figure out what I need to do to get it to work, aside from the fact that an application that isn't confined to a single container is apparently not what Pipelines expects. Anyone here familiar enough with Docker, Bitbucket, and Pipelines to help me figure out what I need to do?
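For what it's worth, Pipelines runs each step in a single build container, so a docker-compose file doesn't carry over directly; the database has to become a service container. A hedged bitbucket-pipelines.yml sketch, assuming Ruby/RSpec specs and Postgres, and assuming your Pipelines version supports service definitions (check the current docs):

code:
image: ruby:2.3

pipelines:
  default:
    - step:
        script:
          - bundle install
          - bundle exec rspec
        services:
          - database

definitions:
  services:
    database:
      image: postgres:9.6
      environment:
        POSTGRES_DB: app_test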

Virigoth
Apr 28, 2009

Corona rules everything around me
C.R.E.A.M. get the virus
In the ICU y'all......

Jenkins has a new blog post up for GC tuning on large instances. I'm going to put it on our test server and throw some load at it.
GC tuning blog post

Anyone here going to re:invent? I'm looking forward to doing GameDay this year.

Docjowles
Apr 9, 2009

Oh hey that owns. Thanks for sharing.

Mr. Crow
May 22, 2008

Snap City mayor for life
I'm struggling to automate creating VMs on an ESXi host; any help would be appreciated.

I'm currently using packer (and new to it) to connect to the server and create a VM. I'm then trying to create a .box from it for vagrant, and this is where it's failing: how do I tell vagrant to look on the server for the image, and/or export it back to my local machine? Will this custom .box even work and allow vagrant to up into the ESXi server (I'm hoping it somehow baked the credentials into the box, but I'm pretty sure this is not going to work in general)?

The biggest problem seems to be not having the ESXi server hooked into vCenter, since all of the plugins for vagrant/Ansible that work with ESXi expect to be using vCenter, but getting that done is out of my hands (though it's something I'm working on).

theperminator
Sep 16, 2009

by Smythe
Fun Shoe

Mr. Crow posted:

I'm struggling to automate creating VMs on an ESXi host; any help would be appreciated.

If you use the Vagrant post-processor in your packer template it should automatically pull it down and do the work I'd think? Can you share your template?
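The suggestion here is packer's vagrant post-processor; a sketch of the relevant template fragment (the output path is an assumption):

code:
{
  "post-processors": [
    {
      "type": "vagrant",
      "output": "builds/{{.BuildName}}-{{.Provider}}.box",
      "keep_input_artifact": false
    }
  ]
}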

theperminator fucked around with this message at 12:25 on Dec 12, 2016

Mr. Crow
May 22, 2008

Snap City mayor for life

theperminator posted:

If you use the Vagrant post-processor in your packer template it should automatically pull it down and do the work I'd think? Can you share your template?

Ended up getting vCenter installed so it's a non-issue.

I'm also 90% sure what I had in my head wouldn't have worked anyway, not without writing a custom plugin.

I can post what I had for posterity if anyone is curious but I wouldn't recommend that approach.

EssOEss
Oct 23, 2006
128-bit approved
I have been tracing mysterious 3-second delays in my newly containerized software stack for a few weeks. Bad timeout management? No, nothing showed up. HTTP server queuing issues? Seems fine. Too much concurrency where not justified? No, all synchronized activities were under low pressure.

And it is such a regular 3 seconds. No drift at all! Though sometimes a multiple of 3 seconds.

At last, I did what I should have done at the start and took a packet capture. Well what do you know... Windows containers randomly fail to initiate TCP connections. The 3 seconds? That's the automatic retry interval.

Mind-boggling how such a failure can happen. So far, it has reproduced on every server I have tried. Anyone seen this failure before?

Mr Shiny Pants
Nov 12, 2012
What makes me wonder in such a situation is how the code path for such an error even comes to exist.

I mean, networking is such a core OS feature; that stuff should be bulletproof at this point.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
By any chance do any of you know how to get TeamCity to only trigger on changes to master from Gerrit? From what I understand, the build triggers are looking at the moon language paths that Gerrit creates, and it cannot disambiguate them. I am wondering if there is something I can do with the VCS root instead.

We are pushing to master in Gerrit using HEAD:refs/for/master if that helps.

I tried to ask in the JetBrains IRC but all that happened in the 30 minutes afterwards was somebody coming on and explaining how to convert to Islam. Also, all men should have a one-fist beard.
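One hedged thing to try (untested; exact syntax depends on your TeamCity version): constrain the VCS root's branch specification so master matches but the per-change refs Gerrit creates never do, something like:

code:
+:refs/heads/master
-:refs/changes/*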

smackfu
Jun 7, 2004

Does anyone here work somewhere that forces all commits to master to go through a pull request (which has to build green before merging)? Is it good or does it just add more annoying process? Currently we just do something like "mvn install && git push", which runs all our integration tests before pushing; that's pretty good at keeping the build green, but it does require discipline.

Hughlander
May 11, 2005

smackfu posted:

Does anyone here work somewhere that forces all commits to master to go through a pull request (which has to build green before merging)? Is it good or does it just add more annoying process? Currently we just do something like "mvn install && git push", which runs all our integration tests before pushing; that's pretty good at keeping the build green, but it does require discipline.

Where I am, we have a system that watches for branches named a certain way: it merges master into the branch, runs the tests, then pushes the result to master. This takes care of the problem you'll see while scaling: there could be 3 people doing that mvn install at the same time, and at best 2 of them will need to rerun their tests, at worst they won't.
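A minimal sketch of what one pass of such a watcher might do per candidate branch (all names hypothetical):

code:
#!/bin/sh -e
# serialize landings: merge master in, test the result, then fast-forward master
BRANCH="$1"
git fetch origin
git checkout "$BRANCH"
git merge origin/master
mvn install                      # full test suite against the merged tree
git push origin HEAD:master      # only reached if the tests passed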

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

smackfu posted:

Does anyone here work somewhere that forces all commits to master to go through a pull request (which has to build green before merging)? Is it good or does it just add more annoying process? Currently we just do something like "mvn install && git push", which runs all our integration tests before pushing; that's pretty good at keeping the build green, but it does require discipline.
Gerrit. Process is good if it keeps people from doing dumb poo poo like hurriedly committing untested, broken code and breaking the build for everyone else.

Integration tests as a gate for merge are bad on big codebases, though. They take a long time. You should have enough unit test coverage to handle most of the clear and obvious build-breaking bugs, and run your integration tests overnight. (This guidance varies if you happen to be doing continuous delivery.)

smackfu
Jun 7, 2004

Hughlander posted:

Where I am, we have a system that watches for branches named a certain way: it merges master into the branch, runs the tests, then pushes the result to master. This takes care of the problem you'll see while scaling: there could be 3 people doing that mvn install at the same time, and at best 2 of them will need to rerun their tests, at worst they won't.

Aye that sounds good. Since our full test suite takes 15 minutes to run, even with our pretty small team we run into that conflict issue and it does waste time.

It's kind of a bummer... the Bitbucket feature that I am piloting does the build *pre-merge*, which is kind of annoying because it still means the merged code can break the master build. Your system seems better.

Mr Shiny Pants
Nov 12, 2012

smackfu posted:

Aye that sounds good. Since our full test suite takes 15 minutes to run, even with our pretty small team we run into that conflict issue and it does waste time.

It's kind of a bummer... the Bitbucket feature that I am piloting does the build *pre-merge*, which is kind of annoying because it still means the merged code can break the master build. Your system seems better.

What I've seen at some companies is a gated merge, which means everything has to build and pass all the unit tests before it gets merged into the master branch.

Used in conjunction with: https://trunkbaseddevelopment.com/

I don't know the pros and cons of this, but I've seen it used successfully.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!
I think it's a good practice; I implemented it at my current job and everyone likes it. The important thing is that everything you put in the check-in gate is fast and reliable. At a previous job we had two-hour check-ins that ran the whole automation suite; they failed 10% of the time on intermittent issues, and the only thing to do was restart from the beginning.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Our repos that non-developers touch regularly (mostly website stuff) are set up to only allow pushing to master by merging a PR that has passed tests, mostly because it helps catch mistakes from users who don't know how to use version control and shouldn't need to learn anything more than the absolute basics.

With a large number of developers working on a repo you really want a merge queue, but I've always seen gated merges without a merge queue as a lovely attempt at solving a social problem with technology.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

do people complaining about waiting on integration tests not do code review? we don't merge anything in less than 12 hours (unless it's a critical fix) because all prs have to go through extensive code review. that always takes longer than running integration tests

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the talent deficit posted:

do people complaining about waiting on integration tests not do code review? we don't merge anything in less than 12 hours (unless it's a critical fix) because all prs have to go through extensive code review. that always takes longer than running integration tests
Unsolicited opinion: if a code review takes 12 hours, your change batches are probably too big. Most of the code reviews I submit can be completed in a minute or two (this is obviously not true for enormous refactors)

Vanadium
Jan 8, 2005

I just tend to get really upset at integration tests with spurious failures that keep blocking my build, until I go to complain to the nearest manager and, in the process of spelling out the issue, figure out that it's not spurious after all. gently caress integration tests.

Mr. Crow
May 22, 2008

Snap City mayor for life

Vulture Culture posted:

Unsolicited opinion: if a code review takes 12 hours, your change batches are probably too big. Most of the code reviews I submit can be completed in a minute or two (this is obviously not true for enormous refactors)

I think he's probably saying people don't jump on the PRs as soon as they're assigned, which seems normal.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

Vulture Culture posted:

Unsolicited opinion: if a code review takes 12 hours, your change batches are probably too big. Most of the code reviews I submit can be completed in a minute or two (this is obviously not true for enormous refactors)

we encourage all developers to at least read the commit log for each pr and raise questions/objections if they have them, even if they are not an assigned reviewer. we tend to leave them open until the next morning just to give people an option to review

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS

Virigoth posted:

Jenkins has a new blog post up for GC tuning on large instances. I"m going to put it on our test server and throw some load at it.
GC tuning blog post
this is great, ty

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
At the minimum you need to rerun the tests after making changes requested during review, and the final review pass shouldn't take a significant amount of time.

Dren
Jan 5, 2001

Pillbug
What do people use to store artifacts between build steps?

My specific scenario is that I'm in Jenkins and I want there to be two build steps in my pipeline.
Step 1: if the relevant part of the source tree changed, I want a pipeline step that creates a vagrant box using packer, then publishes it somewhere.
Step 2: get the latest box from wherever it is published and do stuff.

My problem is I don't know where to put the box in between the steps.

stuff I've looked into:
  • As I understand it stash/unstash don't work because they don't persist between builds and because they're bad for large files.
  • The external workspace manager plugin seems like it might work but could be messy. It's also way overkill: I want to save off one file, not deal with the entire workspace persisting and whatever weird side effects that will have.
  • storing as a jenkins artifact and using the copy artifact plugin - seems like it might work if I hack around with ${BUILD_REVISION}
  • using apache archiva - I got archiva set up, used maven to deploy my artifact with mvn deploy:deploy-file, then... I'm stuck. There doesn't seem to be a way to download an artifact from maven. dependency:get fails because the artifact is not in central. dependency:copy fails because I don't have a pom file for my project (it's not a java project, I don't want a pom file). The archiva REST API gives me an error 204 when I try to download the artifact with wget.

Is there any way to get maven to work? My googling seemed to tell me that people use maven for non-java artifacts all the time and this would be no problem. Failing that, is there anything like archiva that has an interface for putting a file w/ a version and then also an interface for getting that file back?

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
We store things as jenkins artifacts and use the copy artifact plugin and it's awful and a common source of spurious build failures.

Dren
Jan 5, 2001

Pillbug

Plorkyeran posted:

We store things as jenkins artifacts and use the copy artifact plugin and it's awful and a common source of spurious build failures.

Based on examples I saw I assumed that this was the case; it's why I am hunting for a different solution.

Dren
Jan 5, 2001

Pillbug
I got maven to behave. If you set up an archiva then here's what you need to do to deploy whatever into it. You need maven 3.1+ because of this bug. This assumes your archiva is running on archiva.myinternaldomain.com:8080, that you set up a user with credentials jenkins/password, and that maven is installed. In this example I'm uploading my packer box.

code:
create ~/.m2/settings.xml:

    <settings>
        <servers>
            <server>
                <id>archiva.internal</id>
                <username>jenkins</username>
                <password>password</password>
            </server>
            <server>
                <id>archiva.snapshots</id>
                <username>jenkins</username>
                <password>password</password>
            </server>
        </servers>
    </settings>

upload packer artifact to archiva:                                                         
                                                                                
    cd ~/projects/packer
    mvn deploy:deploy-file -Dfile=packer_centos7_virtualbox.box -DrepositoryId=archiva.snapshots -Durl=http://archiva.myinternaldomain.com:8080/repository/snapshots/ -DgroupId=com.myinternaldomain.packer -DartifactId=packer-centos7-virtualbox -Dversion=1.0

look at packer artifact on the web:                                             
                                                                                
    http://archiva.myinternaldomain.com:8080/repository/snapshots
    http://archiva.myinternaldomain.com:8080/repository/snapshots/com/myinternaldomain/packer/packer-centos7-virtualbox/1.0/

download packer artifact:                                                       
                                                                                
    mvn org.apache.maven.plugins:maven-dependency-plugin:3.0.0:get -DremoteRepositories=http://archiva.myinternaldomain.com:8080/repository/snapshots -Dartifact=com.myinternaldomain.packer:packer-centos7-virtualbox:1.0:box
    mvn org.apache.maven.plugins:maven-dependency-plugin:3.0.0:copy -Dartifact=com.myinternaldomain.packer:packer-centos7-virtualbox:1.0:box -DoutputDirectory=. -Dmdep.stripVersion=true
Some things to note:
  • There might be a way to not store the password in plaintext but it looked like a pita to set up.
  • You need to use the snapshots repository if you want to be able to reupload and overwrite the artifact (which is important for a CI pipeline). There seem to be two modes for maven repositories, snapshot and release. Snapshot lets you overwrite stuff, release is permanent (unless an admin intervenes).
  • The group id can be whatever you want. Maven seems to identify artifacts primarily by group id, artifact name, and version. There are some other fields like classifier, I saw something that said classifier should map to the extension of the artifact (default is jar).
  • Don't specify a classifier on the deploy-file step; it will mess with the filename and you won't be able to download the artifact with dependency:get.
  • Do specify the classifier on the dependency:get and dependency:copy steps.
  • The fully qualified name for the dependency plugin was necessary for me because the version I got when I used dependency:copy required a POM file to be in my project. The 3.0.0 version does not.
  • -Dmdep.stripVersion=true gives you back the original filename. If you don't use it -1.0 (or whatever your version is) gets appended to the filename.
  • The dependency:get step downloads the artifact to your local repository at ~/.m2/repository. That repository filling up could become a problem, I don't know. My solution will probably be to rm -rf it from time to time.

I haven't integrated this process with jenkins yet but I feel fairly good about it.
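Wiring those same commands into a Jenkins pipeline might look something like this (a hypothetical sketch reusing the exact commands above; the stage layout is an assumption):

code:
node {
    stage('publish box') {
        sh 'mvn deploy:deploy-file -Dfile=packer_centos7_virtualbox.box ' +
           '-DrepositoryId=archiva.snapshots ' +
           '-Durl=http://archiva.myinternaldomain.com:8080/repository/snapshots/ ' +
           '-DgroupId=com.myinternaldomain.packer ' +
           '-DartifactId=packer-centos7-virtualbox -Dversion=1.0'
    }
    stage('fetch box') {
        sh 'mvn org.apache.maven.plugins:maven-dependency-plugin:3.0.0:copy ' +
           '-Dartifact=com.myinternaldomain.packer:packer-centos7-virtualbox:1.0:box ' +
           '-DoutputDirectory=. -Dmdep.stripVersion=true'
    }
}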

Dren fucked around with this message at 17:40 on May 5, 2017

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
A cheap hack I used to get around this problem was the "Use custom workspace" option: just set the second job to use the first's workspace. Looks like you solved it in a much cleaner way.

Cancelbot
Nov 22, 2006

Canceling spam since 1928

Our TeamCity agents are now described in Packer! No more unicorn build agents :woop:

On the downside it takes over 20 minutes for a Windows AMI to boot in AWS if it's been sysprepped (2 reboots!). Even with provisioned IOPS :(

space kobold
Oct 3, 2009


Welp, got a job as a DevOps Engineer despite all of my prior work experience being strictly in development with only side / personal work having anything to do with actual operations/IT.

Do any of you have some good recommendations on general material I can read up on to prepare myself for the gig? Specifically, general administration stuff regarding DevOps with a focus on CI/CD, Kubernetes, Docker, Terraform, etc. I'd like to start putting together a personal library.

This should be fun. Especially the 24hr~ drive and move I get to do this month!

Mr. Crow
May 22, 2008

Snap City mayor for life
This book is excellent.

The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices https://www.amazon.com/dp/B01BJ4V66M/ref=cm_sw_r_cp_apa_sQjfzbWBC57FV

The space is moving very fast but it's pretty up-to-date.

space kobold
Oct 3, 2009


Right on! I also picked up The Phoenix Project to listen to for the drive up, as I've heard that's a pretty casual read without being heavily technical. Though I have a feeling The DevOps 2.0 Toolkit might be a bit difficult to do as an audiobook.

Mr. Crow
May 22, 2008

Snap City mayor for life
Ya, it's meant to be hands-on; it's less a book and more an interactive guide.

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison

Cancelbot posted:

Our TeamCity agents are now described in Packer! No more unicorn build agents :woop:

On the downside it takes over 20 minutes for a Windows AMI to boot in AWS if it's been sysprepped (2 reboots!). Even with provisioned IOPS :(

that sounds excruciating. why are you sysprepping your agents?
