Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Is there anything out there that spells out how to do a gated check-in between git, Gerrit, and TeamCity? I want pushed commits to sit in Gerrit for review, but I want TeamCity to then turn around and run the built-in tests. I'd prefer that the report somehow show up in Gerrit, so the reviewer can see how well the change did against the repository's tests before possibly reviewing broken code.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
By any chance do any of you know how to get TeamCity to only trigger on changes to master from Gerrit? From what I understand, the build triggers are looking at the moon-language ref paths that Gerrit creates, and they can't disambiguate them. I am wondering if there is something I can do with the VCS root instead.

We are pushing to master in Gerrit using HEAD:refs/for/master if that helps.
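
For reference, the knobs I think are involved look roughly like this--totally unverified, just my reading of the TeamCity docs, so treat it as a guess:

code:
# VCS root (values are guesses)
Default branch:        refs/heads/master
Branch specification:  (leave +:refs/changes/* out so Gerrit's review refs never show up as branches)

# VCS trigger
Branch filter:
  +:<default>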

I tried to ask in the JetBrains IRC, but all that happened in the 30 minutes afterwards was somebody coming on and explaining how to convert to Islam. Also, all men should have a one-fist beard.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
We're about to expand our TeamCity suite to include things like ongoing rolling regressions. We can represent these in TeamCity itself, but that sets off my sense of smell based on what I read on here a while ago. I get the impression we should implement most of this rolling regression as a script with a data file in source control. The data file can be adjusted per-commit to ensure that we are testing what's on HEAD. We've already had issues with somebody changing the test plan in TeamCity and having it -1 all inbound reviews due to a global regression. We had to fix that regression, but none of the commits were targeting it. So I figured that being able to pair this QA suite with the current state of the code would be a bulwark against that.
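
The data file I have in mind is nothing fancy--something like this, with all the names made up:

code:
# hypothetical rolling-regression plan, versioned right next to the code it tests
suite: rolling_regression
steps:
  - name: unit_tests
    command: python3 -m pytest tests/unit
  - name: integration_install
    command: python3 cicd/install_wrapped_apps.py
  - name: gui_smoke
    command: python3 cicd/gui_smoke.py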

The general consensus I've seen is to keep these tools at arm's length. Use them--yes, but don't try to put everything into them. I don't entirely understand why. I just see more people complaining about having gone all-in and then stepping back than people making fun of others for not committing to putting everything in the CI tool itself.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
The big thing here is that somebody will want to add a step to a regression, but we don't want unrelated commits that don't even have the matching code to fail. I expect the incompatible code to get through because it has already happened two of the three times we did this. We're about to start adding more steps more often, and I don't want it to become a game of us having to get together and turn our keys at the same time.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Let's say I want to spin up a bunch of VMs to test a code commit. I was thinking there has to be some standard for dynamically getting these things instead of setting aside X nodes to just sit there, always present, to do this. I imagine this can be throttled as needed, and the resources can be released when not in use. What are the magic words for this? For what I'm doing, I think I need to use Vagrant with VirtualBox because there are GUIs involved. I'm trying to figure out how to ask for this and have the right person on the other end understand it.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
No, they're standalone applications. Some of them have GUI automation around them and I want to make sure it all still works. It's definitely unusual, so I can't even get mad that this keeps coming up. I couldn't use Docker in one hop because of that. I don't know if I could instead, say, boot up a container and run VirtualBox from there. Maybe? I don't think that gains me much, though.

I think we have some kind of access to Azure, and the question is what magic words to pass along through various IT requests to get a fighting chance of somebody instantly recognizing what I'm trying to do here. I was thinking something like "dynamic node requests" or something.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

chutwig posted:

Putting the Fear of The Lord in me.

I should just explain a bit more about what I'm trying to do and roughly how I would expect it to happen--without knowing much about how this is actually done.

When I get a new code commit, I want to run a QA suite I have as a script on a Windows and two specific Linux environments. I need to also test with Python 2.7 and Python 3.6. Part of the software installs a lot of other software that it wraps and I am testing that integration. The actual installation process for those applications is part of the test so I don't want to deploy them on the images ahead of time. Afterwards, I want to extract the build artifacts from the VMs and post them to our artifact repository. So the procedure will be something like:

1. Spin up VM
2. Inject pending code
3. Run QA suite, which will do unit tests, build on the machine (this is another requirement), and run a regression that will install this other software as it goes
4. Extract and post build artifacts.
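
Roughly, per VM, I picture something like this--box names and paths are made up, and I'm assuming the default /vagrant synced folder carries files in and out:

code:
# hypothetical single-permutation pass; the real thing would loop over the OS/Python combos
vagrant up linux_a_py36                                                   # 1. spin up VM
cp -r ./pending_change ./payload/                                         # 2. inject pending code (shows up in the VM as /vagrant/payload)
vagrant ssh linux_a_py36 -c "cd /vagrant/payload && ./run_qa_suite.sh"    # 3. unit tests, build, regression with the installs
cp -r ./payload/artifacts ./to_publish/                                   # 4. extract build artifacts back through the synced folder
vagrant destroy -f linux_a_py36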

Because the applications are classic GUI applications and GUI automation is involved in the software I am testing, I don't think I can just use Docker containers. I don't know if I could bring up a container with VirtualBox and then run my stuff in there. I thought I'd use Vagrant since it sounded like it could handle the insertion and extraction fairly easily against a pile of pre-created VirtualBox VMs.

Since this is done reactively to code commits, I figured I could dynamically request resources on which to run these and release them when they're done. Failing that, I'd just have one static setup and pending QA runs will just have to queue up.

How I expect this to work:
1. A single core's worth of VM resource--heck, even a shared resource should work--is running our QA agent.
2. The QA agent receives notice of a new commit.
3. It accesses a cloud API to request VM resources for each OS/Python permutation it has to run.
4. It gives each one a VM image. The lazy thing here would be to have the three test OSes on the same image and just have the QA agent machine tell it which one to actually use for each instance.
5. Part of the launch process would be a call back to the QA agent when these are done about the result.
6. The QA agent would decommission the nodes--possibly after some checks.

I don't expect even tens of commits to come through at one time so the master agent doesn't have to be powerful. It can be a static instance that's always on; I imagine it would have to be in order to respond to these requests from the source control management software.
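
For step 3, my guess at the Azure end is something like this--completely untested, and the resource group, size, and $BUILD_ID variable are all made up:

code:
# hypothetical request/release cycle for one OS/Python permutation
az vm create \
  --resource-group qa-vms \
  --name qa-linux-a-py36-$BUILD_ID \
  --image UbuntuLTS \
  --size Standard_D4s_v3
# ...QA suite runs and calls back to the agent...
az vm delete --resource-group qa-vms --name qa-linux-a-py36-$BUILD_ID --yes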

uncurable mlady posted:

you need azure virtual machines (if you're using teamcity or jenkins or w/e there's probably a way to have it automagically create a vm for you when a build starts), and presumably this is a windows application so you'll need to perform some fuckery to have it automatically log in to an interactive session. at that point you'd use whatever your test automation bullshit is to run the tests.

The OSes I'm using for testing aren't necessarily the best for a server deployment, so I don't think I could just make the VM nodes themselves run those OSes. Rather, I imagine I would bring up some robust server OS and then run a nested VM for each OS/Python permutation to do what I need to do. I know this nested-virtualization thing makes everything more complicated.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I'm some mix of surprised and unsurprised to hear this. I know I'm trying to do something goofy with VMs. If I were just deploying some kind of service without these OS constraints, then we'd probably already have Docker containers in mind. We actually have a lot of lab infrastructure to house a server, but we didn't want it to become some ball and chain to deal with. It would also be idle more often than not, but I guess that's the cost of doing what we're trying to do.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Well, I managed to get Vagrant to boot up VMs for testing our code individually, but I'm trying to figure out how to combine all this so I launch all the test VMs simultaneously. Vagrant has a multi-VM mode where I just specify all the different configurations in one Vagrantfile. I tried this and it appeared to run them serially. I put my test in the provisioning logic, so I am not too surprised something like that might happen. However, I wanted to double-check that this is the case. Generally, does Vagrant provision boxes sequentially?

My other situation is figuring out the best way to run each of the different VMs in a different mode. Specifically, I need to test both for Python 2 and Python 3, and I'm trying to do them on separate VMs. The stuff I am running can switch modes basically with an environment variable. Does anybody know a good way to duplicate each VM with a different toggle like that?

I'm trying to avoid having to write a bunch of automation around this. I'm already bummed about the multi-VM sequential thing and hope I'm just wrong. Being able to kick the whole thing off from one Vagrantfile would be great.
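
What I'm hoping is possible is one Vagrantfile that stamps out a VM per permutation, with the toggle passed through the provisioner's environment--an untested sketch, with box names and script paths made up:

code:
Vagrant.configure("2") do |config|
  # hypothetical boxes; the real ones would be our prebuilt VirtualBox images
  boxes = { "linux_a" => "ubuntu/xenial64", "linux_b" => "centos/7" }
  pythons = ["2.7", "3.6"]

  boxes.each do |name, box|
    pythons.each do |py|
      config.vm.define "#{name}-py#{py}" do |node|
        node.vm.box = box
        # PY_VERSION is the environment-variable mode switch the QA script reads
        node.vm.provision "shell",
          path: "cicd/run_qa_suite.sh",
          env: { "PY_VERSION" => py }
      end
    end
  end
end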

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

chutwig posted:

Serial vs parallel depends on the provider. VirtualBox (the default) is serial only. Other ones, particularly ones that are just making API calls to a thing, can usually run in parallel.

Fundamentally a Vagrantfile is just Ruby, so once you unravel how it sets up the VMs, you can write whatever loops and such that you want to set up lots of VMs.

Ahh, that's good to know. I'm awful at Ruby, but I did manage to squeeze out a little code to load the files separately into one Vagrantfile. However, it's pointless if they run sequentially. I'm betting that specifying all this in separate threads doesn't mean I can make them run in parallel, due to what are probably serious race conditions in the VirtualBox provider.

I'm assuming that if I'm launching these from separate processes, I'll have to manage assigning the forwarded WinRM and SSH ports too, but I guess I'll try it without any management first, like an animal.
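
At least for the collisions, it sounds like Vagrant auto-corrects its default SSH/WinRM forwarded port on its own, and any extra forwarded ports can opt into the same thing--another untested guess:

code:
# inside each VM definition; 55985 is just a placeholder host port for WinRM
node.vm.network "forwarded_port", guest: 5985, host: 55985, auto_correct: true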

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
"Oh hey I haven't looked a the CI thread in awhile. Maybe I should close it since I've just about finished up my prototype..."

fletcher posted:

You may want to look into test kitchen: https://kitchen.ci/

It's built by the Chef guys but I don't think you necessarily have to use Chef with it. It works great for spinning up a bunch of VMs in parallel, running stuff on them, and then tearing them down.

Gaaaaaaaaaaah why did I have to see this now?

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I'm trying to understand the GitLab Runner without formal training--particularly the configuration YAML file. It's been doing stuff in ways I didn't really expect. Most recently, I attempted to launch my tests in a two-stage process: setup and test. I defined these in a stages section like I saw in the documentation. The test stage depends on the setup stage and runs much longer. It has commands in its after_script block that rely on collateral acquired during the setup stage. Just about everything ran as expected, but when it came time to run the after_script stuff, it couldn't find an application it had downloaded in setup. Actually, I should just paraphrase with an example:

code:
stages:
  - setup
  - test

setup_regressions:
  stage: setup
  script:
    - curl -fL https://getcli.jfrog.io | sh
    - ./jfrog rt config --url=$ARTIFACTORY_URL --user=$ARTIFACTORY_USER --password=$ARTIFACTORY_PASS derp-pary
    - ./jfrog rt c show
  
vm_regressions:
  stage: test

  script:
    - python3 cicd/vm_regression.py --flags_out_the_whazoo

  after_script:
    - find /tmp/$CI_PIPELINE_ID -name "*.box" -exec rm {} \;
    - ./jfrog rt u /tmp/$CI_PIPELINE_ID/ fun_repo/gitlab/$CI_PIPELINE_ID/
    - rm -Rf /tmp/$CI_PIPELINE_ID

  only:
    - master
    - vm_hardening

This is edited from the original, sanitized and simplified.

I can see setup_regressions runs and jfrog outputs some stuff. It's definitely there at that point. However, the command is not found when it's run in the after_script section of vm_regressions. Why is this?

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

fluppet posted:

You're not caching/artifacting any items between stages, so unless you are running both stages on a single runner in a shared workspace it won't work

The real file does have a cache, but only for pip for some Python modules. So I guess that's something I need to look up. As for the runner and workspace: it is a single runner right now, but I don't understand shared workspaces. I just found this proposal for them, which makes me think they aren't formally a thing. I'm doing most of my work under /tmp/$CI_PIPELINE_ID, but that jfrog client is clearly being downloaded and installed into wherever the runner has decided to run. So should I be mentioning it in the cache?
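
If I'm reading the artifacts docs right, carrying the binary across stages would look something like this (untested):

code:
setup_regressions:
  stage: setup
  script:
    - curl -fL https://getcli.jfrog.io | sh
  artifacts:
    paths:
      - jfrog              # later stages download job artifacts by default

vm_regressions:
  stage: test
  before_script:
    # the rt config state lives in the runner's home directory, not the workspace,
    # so redo it in the job that actually uploads
    - ./jfrog rt config --url=$ARTIFACTORY_URL --user=$ARTIFACTORY_USER --password=$ARTIFACTORY_PASS derp-pary
  script:
    - python3 cicd/vm_regression.py --flags_out_the_whazoo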

Also, I don't really have a bunch of parallel actions that the GitLab Runner can manage, so should I even bother splitting this all up? Right now, I'm trying it as one big job with before_script, script, and after_script sections. The bulk of the parallel work of launching all the VMs is being done by a Python script in the main payload.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I'm hoping somebody here can explain the process (and their experience) of generating some documentation in GitLab and hosting it with GitLab Pages.

I have a bunch of Markdown that I can point to from the HEAD of master in my repository. However, I also want to at least:
1. Show a table I have to generate based on the current state of the repository. It serves as something like a catalog.
2. Post the API documentation some place.

I believe I need to create a separate pages project for this and just link to it from my Markdown. Do I put stuff in my source project's .gitlab-ci YAML to create and publish that? How do I point it to the pages project? Can I do this without a separate pages project?
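
From poking at the docs, it looks like it might just be a job literally named pages in the same project's .gitlab-ci YAML that publishes whatever ends up in public/--something like this, with the script contents made up:

code:
pages:
  stage: deploy
  script:
    - mkdir -p public
    - python3 cicd/build_catalog.py > public/catalog.html   # hypothetical generator for the catalog table
    - cp -r build/api_docs public/api                       # hypothetical pre-built API docs
  artifacts:
    paths:
      - public
  only:
    - master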
