|
We're using Artifactory because it lets us host all of our other repos (RPM, NuGet, gem, etc.) at the same time, and because it's S3-backed it's fairly economical and we don't really have to worry about space issues.
|
# ? Feb 8, 2019 23:28 |
|
|
Cloud Artifactory is hideously expensive in bandwidth and storage costs, but it does have tools for auditing your software supply chain, similar to the Sonatype stuff. If all you do is stuff things into a bucket with metadata to describe them as your repo solution, self-hosting may not be the worst idea. When you start looking for stuff like security analysis, provenance, etc., then it makes sense to use the expensive stuff (which will probably also suck as a general-purpose repo). I think it'd be cheaper overall to use those tools as mirrored repos (backups) kept off your primary day-to-day stuff from other sources; that way you get the usability of non-enterprise CIO-ware with the CYA of your CIO-ware. You just need to make sure you can prove beyond reasonable doubt that artifact A corresponds at all times to artifact B in the mirror.
|
# ? Feb 8, 2019 23:48 |
|
Vulture Culture posted: Completely, totally incorrect. Logstash's ES output in HTTP mode will respect username and password in any URL in the hosts option, or via the user and password fields. If you'd rather use a token than basic auth, you can use custom_headers. Beats has username and password options for Elasticsearch, or it can do TLS-based client authentication against Logstash's Beats input.

Late reply but thank you for the correction, this is good to know!
|
# ? Feb 10, 2019 11:14 |
|
Warbird posted: Everything's great except the fact that Github is blocked on the network for christ knows what reason, as is the specific tool I was hired to work with. I've been advised to do research on my personal laptop and just email myself the code snippets I'm interested in.

Ironically, often the reason GitHub is blocked is because people check company code into it so they can work on it at home. Now that they have free private repos, maybe this will be less of a problem.
|
# ? Feb 10, 2019 15:12 |
|
We use Amazon's ECR, as we are becoming an Amazon shop, and it mostly just works with kops and IAM creds.
|
# ? Feb 10, 2019 19:23 |
|
smackfu posted:Ironically often the reason GitHub is blocked is because people check company code into it so they can work on it at home. Now that they have free private repos maybe this will be less of a problem.
|
# ? Feb 10, 2019 23:36 |
|
Bhodi posted:people should know by now that you can't block stupid
|
# ? Feb 11, 2019 02:03 |
|
We run GitHub Enterprise in our DC and it's pretty nice, but we're a big Fortune 50 and I don't get any greens on my public GitHub profile now
|
# ? Feb 11, 2019 02:34 |
|
Vulture Culture posted: I think the least costly way for companies to fix this would be to structure incentives so people don't need to continue working when they leave the office or take a sick day

smackfu posted: Ironically often the reason GitHub is blocked is because people check company code into it so they can work on it at home. Now that they have free private repos maybe this will be less of a problem.

Maybe I'm really out of touch, but I haven't had anything but a laptop for a company machine for over a decade now, so if you wanted to work on code at home you've generally been fine to do at least some local development. Do any software engineers besides CAD, game, and machine learning type folks get desktops from their employers these days?
|
# ? Feb 11, 2019 03:28 |
|
The only place I've seen desktops was at the railroad some years ago, and even then only for the contract people. The unions there were super weird, so those machines could only access 3 web pages and nothing else.

Back on topic: can anyone recommend some required PowerShell reading? I'm doing Chocolatey packaging these days and some of the customizations I'm having to make are starting to stretch my abilities with the language.
|
# ? Feb 11, 2019 04:15 |
|
necrobobsledder posted: Maybe I'm really out of touch but I haven't had anything but a laptop for a company machine for over a decade now, so if you wanted to work on code at home generally you've been fine to do some local development at least at home. Do any software engineers besides CAD, game, and machine learning type folks get desktops from their employers these days?

Desktops are still frequently seen in fintech because of paranoia about IP walking. In this case it's a security thing rather than a performance thing. If you're determined you can still figure out ways to get stuff out, and attitudes are shifting, but it's an ongoing process. When I started in fintech 4 years ago at my current company, everything was desktops. I was one of the first to pilot laptops, and now probably 40% of engineering has a laptop instead of a desktop. As recently as last year, when I talked to one particular hedge fund, they still used only desktops and required people to RDP in if they wanted to work remotely.
|
# ? Feb 11, 2019 04:30 |
|
Warbird posted: The only place I've seen desktops was at the railroad some years ago and even then only for the contract people. The unions there were super weird so those machines could only access 3 web pages and nothing else.

Don't know of any specific texts, but the PowerShell thread might have some info: https://forums.somethingawful.com/showthread.php?threadid=3286440
|
# ? Feb 11, 2019 19:01 |
|
PowerShell in a Month of Lunches is the book goons always seem to rave about.
|
# ? Feb 11, 2019 19:10 |
|
Gods help me, I discovered Gravity from a random link yesterday and it's the first tool in the K8S ecosystem that ever looked tempting to me. That is, it seems to actually make using K8S simpler rather than piling more abstraction / configuration on top of it. Has anyone tried it?
|
# ? Feb 12, 2019 10:26 |
|
Docjowles posted: PowerShell in a Month of Lunches is the book goons always seem to rave about.

Oh right, I remember hearing something about that a few years ago. Thanks for the reminder! I'll go eyeball the PowerShell thread as well.
|
# ? Feb 12, 2019 15:14 |
|
Ugh, I feel like our devops is turning back into dev and ops. Originally one dev team that was ahead of the curve wrote all the code for their Jenkins jobs and had admin access to Jenkins. But as time has gone on, more developer teams have started using the existing jobs (which do work great) without needing to write anything or be admins. It's gotten to the point where we have a devops Slack channel where developers complain that the Jenkins queue is too long, and someone else from the original team, now acting as ops, promises to look into it. Not sure what the fix is.
|
# ? Feb 16, 2019 16:55 |
|
If you have matrix-based authorization enabled, you can try giving different groups their own namespaces with admin access over them. Really, delegation is important for scaling out permissions and granting transitive trust: start by trusting the original group and have them deputize others. My devs come to me when they have issues with the network, firewalls, badly behaving external services, and reliability of nodes, and that's good enough separation of duties for me. If there are jobs queued up for a while, it sounds like people are getting impatient (people here get antsy if a job is queued for more than 2 minutes when it takes 5 minutes to sense and launch a new node in the spot fleet). Perhaps you should be using pipeline libraries more, so developers can work with each other to share build code and start learning as a community?
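For the pipeline-library idea, a minimal sketch looks something like this (the library name, step name, and commands here are made up for illustration, not from anyone's actual setup):

```groovy
// vars/standardBuild.groovy in a hypothetical Jenkins shared library repo.
// Teams call standardBuild(...) from their own Jenkinsfile instead of
// copy-pasting job code, so build logic stays shared and reviewable.
def call(Map config = [:]) {
    node(config.label ?: 'linux') {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            sh(config.buildCmd ?: 'make')
        }
    }
}

// A consuming Jenkinsfile then reduces to roughly:
//   @Library('build-lib') _
//   standardBuild(label: 'docker', buildCmd: './gradlew build')
```

The nice part is that fixes to the shared steps land in one repo and every team picks them up on their next build.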
|
# ? Feb 16, 2019 18:34 |
|
Thanks for the good ideas. I think another contributor is that there is just a single Jenkins executor pool so one team can impact every other team and the “ops” people need to sort it out. If our team had a dedicated slave we wouldn’t have to go running for help as often.
|
# ? Feb 17, 2019 18:49 |
|
Oh man, no wonder you have problems with queueing up jobs. I solved this with a Jenkins-focused EC2 spot fleet: we get a new slave up if a job is queued for longer than 5 minutes or so (this mechanism is not well documented nor tunable at all, unfortunately). All of our jobs are mutually exclusive with each other due to bad usage of a global .m2 repo directory, so many of our nodes sit idle 70%+ of the time and then burst through 10+ jobs at once. We also have a problem tracking code quality deltas wrt the primary branch and wind up needing to build the mainline branches alongside PRs, so we need two nodes at a time to do a single build with the right feedback. Other options include K8S-based Jenkins, but given that not all builds can be containerized effectively, that may be a no-go as well.
|
# ? Feb 17, 2019 21:01 |
|
necrobobsledder posted: Oh man, no wonder you have problems with queueing up jobs. I solved this with a Jenkins focused EC2 spot fleet and we get a new slave up if a job is queued up for longer than 5 minutes or so (this mechanism is not well documented nor tunable at all unfortunately). All of our jobs are mutually exclusive to any other due to bad usage of a global .m2 repo directory so many of our nodes sit idle 70%+ of the time and then burst through 10+ jobs at once. We have a problem tracking code quality deltas wrt the primary branch and wind up needing to build the mainline branches alongside PRs so we need two nodes at a time to do a single build with the right feedback.

K8s Jenkins is an extremely bad idea
|
# ? Feb 18, 2019 04:02 |
|
What’s everyone using for ingress with k8s? I’ve been using nginx and we’re not doing anything crazy with it, just wondering if there are any clear advantages to that or something else like traefik?
|
# ? Feb 18, 2019 04:08 |
|
Spring Heeled Jack posted:What’s everyone using for ingress with k8s? I’ve been using nginx and we’re not doing anything crazy with it, just wondering if there are any clear advantages to that or something else like traefik?
|
# ? Feb 18, 2019 04:11 |
|
Spring Heeled Jack posted: What's everyone using for ingress with k8s? I've been using nginx and we're not doing anything crazy with it, just wondering if there are any clear advantages to that or something else like traefik?

HAProxy, and it works great, except now we use kube-router announcing over BGP and the ingress is kind of pointless with routable pods and services
|
# ? Feb 18, 2019 07:15 |
|
NihilCredo posted: Gods help me, I discovered Gravity from a random link yesterday and it's the first tool in the K8S ecosystem that ever looked tempting to me. That is, it seems to actually make using K8S simpler rather than piling more abstraction / configuration on top of it. Has anyone tried it?

I worked at Gravitational, Gravity is baller. Just saw they open sourced the community version, which rules. Happy to talk more about it if you have questions.
|
# ? Feb 18, 2019 07:17 |
|
Mao Zedong Thot posted: Haproxy and it works great except for now we use bgp announcing kube-router and the ingress is kind of pointless with routable pods and services

Noooooo...... Why....... I'm so sorry.

Fwiw I've pushed us towards nginx just because it's simple and supports our WAF. Istio gateway is terrible and you should avoid it.
|
# ? Feb 18, 2019 07:17 |
|
Spring Heeled Jack posted: What's everyone using for ingress with k8s? I've been using nginx and we're not doing anything crazy with it, just wondering if there are any clear advantages to that or something else like traefik?

Traefik's worked pretty well for us for L7. I usually deploy it with a wildcard TLS certificate for a given subdomain, and then everyone who registers an ingress under that subdomain gets TLS termination for free. The main downside is you still need some sort of exterior LB to direct traffic to the various ingresses. We use DNS load balancing with an occasional health check, but you could probably do a lot better with an F5 pair, or ELB, or basically anything else.

Mao Zedong Thot posted: Haproxy and it works great except for now we use bgp announcing kube-router and the ingress is kind of pointless with routable pods and services

hissssssssssssssssssssssssss

While I'm a big fan of Calico, I view routable pods/services as breaking encapsulation. I don't want anyone pointing directly to a particular pod and then complaining when that pod crashes, or a deployment recycle changes all the pod identities and CNI hands out new IPs. I want traffic into a cluster to come in through static entry points and then get routed from there; ideally nobody using an application in the cluster should have to know anything but a hostname to connect to. Obviously this is kind of dogmatic and your situation may be different, but it would take a fair amount of activation energy to convince me that the right solution involves bypassing ingress controllers.
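For reference, the per-app side of that wildcard setup is just an ordinary Ingress; the hostnames, names, and cert are invented here, and the API version is the pre-1.14 one current when this was written. The wildcard cert itself lives in the Traefik deployment, so apps never touch TLS:

```yaml
# Hypothetical app Ingress registered under the wildcard subdomain.
# The *.apps.example.com cert is configured on Traefik itself, so any
# host matching the wildcard gets TLS termination for free.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: myapp.apps.example.com
      http:
        paths:
          - backend:
              serviceName: myapp
              servicePort: 80
```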
|
# ? Feb 18, 2019 18:17 |
|
Have your service IP subnet range be public and ECMP route to it.

Serious answer: I think BGP peering your k8s service IP range to the rest of your internal datacenter with ECMP is actually the best way to do it. You can set up a scheme where real DNS records are created for Service IPs in addition to the normal service-name.namespace.cluster.local ones you normally get. Routing to pods directly is super dumb, but routing to service IPs is fine. It's more complicated and lovely when you need to receive from internet clients and not just other things in your DC. I really like Github's approach to how they do it.

https://githubengineering.com/introducing-glb/
https://githubengineering.com/glb-director-open-source-load-balancer/
https://github.com/github/glb-director
https://githubengineering.com/kubernetes-at-github/

This is all for if you're doing it yourself on prem. If you're in aws please just use elb.

Methanar fucked around with this message at 19:40 on Feb 18, 2019 |
# ? Feb 18, 2019 19:31 |
|
We are using zalando’s “skipper” project for ingress. I don’t know why, and the people that made that decision have left the company. I’m in a position to change it up if there is a good reason. Anyone else familiar with it at all? It’s working fine and I have no complaints. Just curious since I haven’t heard it mentioned much. But zalando is super active in the k8s community so it’s not total abandonware at least. We are in AWS so it’s fronted by an ALB.
|
# ? Feb 19, 2019 01:39 |
|
Anyone using CircleCI heavily? We're piloting it and generally pleased; about to pay into them, but want to see if there are any warts I'm missing first.
|
# ? Feb 19, 2019 02:20 |
|
Posting this here as the PowerShell thread appears to be hella dead:

Any of you folks use Chocolatey? I've got an Oracle client that is being an absolute fucker and I'm fairly sure that I'm missing something. We have a response file for the install, but the executable requires a full path to said file. No ./tools/foo.rsp here. I can run the install straight from PS just fine, but once it goes into Chocolatey it breaks all to hell; the exit code is consistent with being unable to find the response file. I've copied the file to C: just to have it in a static place outside of the packaging process, but it still fails during the package install. Anyone have any suggestions on potential next steps? I'm currently out of ideas.
|
# ? Feb 19, 2019 05:21 |
|
Methanar posted:This is all for if you're doing it yourself on prem. If you're in aws please just use elb.
|
# ? Feb 19, 2019 12:45 |
|
ELB is the only AWS LB that lets you use both TCP and HTTP based targets from the same LB. Most of the stock K8S ingress controllers I’ve seen only supported classic ELBs anyway and wouldn’t use the newest ALB based ingress controllers. The ingress controllers from AWS want you to use their CNI for the network overlay and that’s just asking for issues at this point unless you have some serious discipline to avoid the drawbacks and limitations.
|
# ? Feb 19, 2019 13:52 |
|
Warbird posted: Posting this here as the Powershell thread appears to be hella dead:

Is the answer file included in the package? Can you use Resolve-Path on ./tools/foo.rsp to get the absolute path and pass that in? If you're passing it in through $silentArgs you might have to watch how quotes and variables get escaped, too. Remember, ChocolateyInstall.ps1 is just a PowerShell script, so any pre-processing or input sanitation you could do in PS you can do here.
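Something along these lines in ChocolateyInstall.ps1, as a rough sketch (foo.rsp is from the post above; the installer name and its -responseFile flag are hypothetical stand-ins for whatever the Oracle installer actually expects):

```powershell
# ChocolateyInstall.ps1 -- resolve the packaged response file to an
# absolute path before handing it to the installer.
$toolsDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
$rspPath  = Join-Path $toolsDir 'foo.rsp'

# Quote the path in case $toolsDir contains spaces, then escape the
# quotes for the silent-args string.
$silentArgs = "-silent -responseFile `"$rspPath`""

Install-ChocolateyInstallPackage -PackageName 'oracle-client' `
    -FileType 'exe' `
    -File (Join-Path $toolsDir 'setup.exe') `
    -SilentArgs $silentArgs
```

The key point is that $toolsDir is absolute at install time, so the installer never sees a relative ./tools path.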
|
# ? Feb 19, 2019 18:12 |
|
Way ahead of you there, friend. I think the fellow that was working on the package before me messed something up in the nuspec or install file and I didn't notice. I did a clean start with the package generator I wrote and it's breaking in a far more reasonable way that tells me what it has issues with now. Hopefully it should just be a matter of time, but man has this been a pain.

Meta talk: a manager from another team has been trying to poach me to come over and work on Ansible-based stuff as opposed to just Windows (Choco) packaging. I'm assuming this would be a better career move, as Ansible has wider applications and adoption than Chocolatey at this point. Plus I could use more experience in the provisioning world. Thoughts?
|
# ? Feb 19, 2019 20:44 |
|
Choco is like apt-get; Ansible is way different from just a package manager.
|
# ? Feb 19, 2019 21:27 |
|
Being a full-time Chocolatey packager sounds like an extremely boring and limited role (I could be totally wrong, I don't mean to poo poo on your job), and unlikely to translate well into new roles if/when it's time for you to move on. So yeah, I would be looking for opportunities to get into something else in that situation.
|
# ? Feb 19, 2019 21:32 |
|
Writing clean terraform is so nice. Now the countdown begins til we abandon it.
|
# ? Feb 20, 2019 21:37 |
|
I don't get the Terraform (mild?) hate - We have 24 AWS accounts with over 200 servers managed entirely in Terraform. Recently our on-premises subnets changed and it took all of 10 minutes to roll it out to everyone's code. The only people I work with who actually hate it are the old infrastructure teams who cry whilst cuddling their old Dell blades.
|
# ? Feb 21, 2019 14:19 |
|
Anyone have a good AWS Terraform example config for something like that? We're bringing up a new VPC from scratch and want to switch to autogenerating the subnets, SGs, IAM roles, and such, and I don't really want to fall into any obvious traps. Any good whitepapers or blogs on this? Like, some VPCs are nearly permanent, so you might want a different state store than your apps use, just to prevent accidents? Stuff like that.
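The "separate state store for the long-lived stuff" idea looks roughly like this (bucket, key, CIDR, and names are all made up for the sketch):

```hcl
# network/main.tf -- the near-permanent VPC gets its own state file,
# so an apply in an app stack can never touch it by accident.
terraform {
  backend "s3" {
    bucket = "example-tf-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# In an app stack, consume the VPC read-only via remote state instead
# of declaring it, e.g.:
#
#   data "terraform_remote_state" "network" {
#     backend = "s3"
#     config = {
#       bucket = "example-tf-state"
#       key    = "network/terraform.tfstate"
#       region = "us-east-1"
#     }
#   }
```

Since the app stacks only read the network outputs, a bad apply there can break the apps but never the VPC.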
|
# ? Feb 21, 2019 14:27 |
|
Small tasks become cumbersome: making a small tweak is very easy to do in the UI but can be hard and time-consuming in tf. And if you do use tf, since it reconciles global state, it sometimes starts editing things you don't want. Peering, for example, is easier to do in the UI across accounts, but then adding those routes in is a nightmare. Kubernetes also makes it semi-useless, because Kubernetes duplicates a ton of it, or in some cases will fight Terraform and cause things to break. It's also always made me nervous with databases like RDS, as it thinks it needs to regenerate things sometimes, and if you aren't paying attention, oops!

24 accounts is a lot for only 200 servers? We have like 15 and run thousands (although I am coming to understand that this is because devs have no idea what they are running)
|
# ? Feb 21, 2019 14:28 |