Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
I've been told ansible and molecule are great but the past few days with it have been miserable. This poo poo sucks and the documentation is awful.

12 rats, you lied to me

drunk mutt
Jul 5, 2011

I just think they're neat

Blinkz0rz posted:

I've been told ansible and molecule are great but the past few days with it have been miserable. This poo poo sucks and the documentation is awful.

12 rats, you lied to me

Wait, someone advocated for molecule ITT? lol

12 rats tied together
Sep 7, 2006

Molecule is dogshit, sorry. No offense to the maintainers (who are volunteers AFAIK) but ansible tests itself every time it runs and doesn't need to be unit tested. Using molecule IME makes stuff way more confusing for actually 0 benefit, not even just "maybe a little benefit".

If you have specific complaints I'm happy to offer advice (probably: write a plugin. write a module.)
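
To make the "write a plugin. write a module." advice concrete, here is a minimal sketch of a custom module. The module name and its path/line parameters are invented for illustration; the AnsibleModule boilerplate from ansible.module_utils.basic is the standard entry point. Dropped into a playbook-adjacent library/ directory (or packaged in a collection), it behaves like any built-in module, including check mode and changed reporting.

```python
#!/usr/bin/python
# library/ensure_line.py -- hypothetical module name, purely for illustration.
# Appends a line to a file if it is missing and reports changed/unchanged.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type='str', required=True),
            line=dict(type='str', required=True),
        ),
        supports_check_mode=True,
    )
    path = module.params['path']
    line = module.params['line']

    try:
        with open(path) as f:
            present = line in f.read().splitlines()
    except FileNotFoundError:
        present = False

    changed = not present
    if changed and not module.check_mode:
        with open(path, 'a') as f:
            f.write(line + '\n')

    module.exit_json(changed=changed, path=path)


if __name__ == '__main__':
    main()
```

Called from a task like any other module, it only reports changed when it actually wrote something, which is the "ansible tests itself every time it runs" property in action.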

xzzy
Mar 5, 2009

Ansible is kinda poo poo too though.

Not because of any specific flaw with the software.. just that config management is a painful job. I guess my biggest nitpick with it is setting up inventories.

Docjowles
Apr 9, 2009

This is reminding me of a former boss who would insist we write a bunch of tests for all of our chef cookbooks. There were certain functional tests I didn't mind, like "does the cookbook that sets up this app result in the health check passing at the end". But all the unit tests boiled down to like "does the built-in package resource do its one job of installing the package you asked for". Chef itself has plenty of tests; we don't need to spend our limited time on double-validating the most fundamental poo poo.

Hadlock
Nov 9, 2004

xzzy posted:

Ansible is kinda poo poo too though.

Not because of any specific flaw with the software.. just that

:hellyeah:

gently caress ansible

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Would you guys rather be working with Puppet, Salt, or Chef? I've never used Salt but I'd take Ansible over Puppet or Chef hands down.

12 rats tied together
Sep 7, 2006

It's got a bunch of flaws but every time I try something else I run into a problem that's either "put this in ansible" or "fork the project" or "do a ton of work recreating some aspect of ansible".
  • Pyinfra is cool but doesn't support dynamically modifying the task list during a run, which leaves me putting pyinfra runs into ansible.
  • Terraform is cool but it can't apply 2 states in order, which leaves me putting the terraform apply into ansible.
  • dsh doesn't have: Strategies. Batch size/throttles. Groups and group_by. Notifications and handlers. etc
  • Invoke / similar "run command over ssh" libraries have the same problem dsh does. If it were that easy I wouldn't have a job in the first place, so at work I'm doing hard stuff with ansible.
And then you've got awx for your enterprise cringe, ansible-shell and adhoc for one-off tasks, and rulebook for your event-sourced tasks, it's really all there. You just gotta like, pin ansible to 2.11 or so, and write plugins every time you get mad about something, which is always.

Hadlock
Nov 9, 2004

To control what? Physical servers?

drunk mutt
Jul 5, 2011

I just think they're neat
Comparing Ansible to Terraform is really not something one should consider. You wind up with a poo poo ton of null resources, or half-assed managed providers, and state management meant for long-standing resources, not configuration management. Hell, HashiCorp even admits that letting the community think this way back in like 0.7 was a mistake, and worked with Red Hat to create the Terraform Ansible plugin so the two are easier to use in conjunction.

ETA: Ansible itself isn't really all that great, but the task graphing that it does and the out-of-the-box idempotency are good selling points. Basically the only thing that I hear people complain about is inventory management; beyond that it's just stuff like "where the gently caress does this variable value come from" (which is a design problem, not a tooling problem).

drunk mutt fucked around with this message at 00:36 on Jan 25, 2024

Hadlock
Nov 9, 2004

Best way to flag unencrypted sops files and prevent them from getting pushed to GitHub?
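
For context on the kind of check being asked about here: one common shape is a pre-commit hook that refuses to commit files that should be sops-encrypted but carry no sops markers. The sketch below is only illustrative; the secrets/ path convention and the script name are assumptions, and the only signal it relies on is that sops-encrypted files contain a sops metadata section and ENC[...] values.

```python
#!/usr/bin/env python3
# check_sops.py -- hypothetical pre-commit hook; the secrets/ convention is an assumption.
# Fails the commit if any staged file under secrets/ lacks sops encryption markers.
import subprocess
import sys

WATCHED_PREFIX = "secrets/"  # files expected to be sops-encrypted (assumed convention)

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

unencrypted = []
for path in staged:
    if not path.startswith(WATCHED_PREFIX):
        continue
    # Inspect the staged blob, not the working-tree copy.
    blob = subprocess.run(
        ["git", "show", f":{path}"],
        capture_output=True, text=True, errors="replace", check=True,
    ).stdout
    # sops-encrypted files carry a 'sops' metadata block and ENC[...] values.
    if "ENC[" not in blob and "sops" not in blob:
        unencrypted.append(path)

if unencrypted:
    print("Refusing to commit files that look like unencrypted secrets:")
    for path in unencrypted:
        print(f"  {path}")
    sys.exit(1)
```

Wired up as .git/hooks/pre-commit (or via the pre-commit framework), plus the same check in CI for anything that slips past client-side hooks, that covers both the flagging and the "don't let it reach GitHub" halves.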

Docjowles
Apr 9, 2009

Twerk from Home posted:

Would you guys rather be working with Puppet, Salt, or Chef? I've never used Salt but I'd take Ansible over Puppet or Chef hands down.

I haven't had to use any sort of traditional config management in years thank god, but having used all of Puppet/Chef/Salt in production I really enjoyed Salt. To the point where I used to host my city's Salt user group and had the founder come out and give a talk, lmao. I think I am like the only person on earth who strongly preferred it, though. Part of that was admittedly being much more proficient in Python so it was easier for me to extend it or send pull requests upstream or whatever vs Ruby. In the end they are all more or less doing the same poo poo in the same way with some different window dressing. Salt was ahead of the curve in using YAML for its DSL before k8s even existed, I guess.
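
On the "easier to extend it in Python" point: custom Salt execution modules are just Python files dropped into the fileserver's _modules directory and synced to minions. A minimal sketch, with an invented module name and function:

```python
# salt://_modules/sitecheck.py -- hypothetical custom execution module.
# After `salt '*' saltutil.sync_modules`, callable as `salt '*' sitecheck.http_ok <url>`.
import urllib.request


def http_ok(url, timeout=5):
    '''
    Return True if the URL answers with an HTTP 2xx status.

    CLI Example:
        salt '*' sitecheck.http_ok https://example.com
    '''
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False
```

The same file-drop pattern works for custom states, grains, and pillar modules, which is a big part of why being comfortable in Python makes Salt pleasant to live with.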

This is also making me feel extremely old. I entered tech before config management was a thing, then for a minute it was the ONLY THING you needed on your resume, and now it's basically dead because containers and Lambda. Whenever I job search again I'm not sure if I should even bother putting 5 years of Chef on my resume, it feels about as relevant as my time maintaining Windows 2003 servers.

12 rats tied together
Sep 7, 2006

I like the idea of Salt mostly because of Reactor, which exists now as Rulebook, but sometimes it's nice to have an agent. If I was going to take a config manager janitor job today, I would want it to be Salt.

drunk mutt posted:

Comparing Ansible to Terraform is really not something one should consider. You wind up with a poo poo ton of null resources, or half-assed managed providers, and state management meant for long-standing resources, not configuration management. Hell, HashiCorp even admits that letting the community think this way back in like 0.7 was a mistake, and worked with Red Hat to create the Terraform Ansible plugin so the two are easier to use in conjunction.

ETA: Ansible itself isn't really all that great, but the task graphing that it does and the out-of-the-box idempotency are good selling points. Basically the only thing that I hear people complain about is inventory management; beyond that it's just stuff like "where the gently caress does this variable value come from" (which is a design problem, not a tooling problem).

I was active in the early-ish days of the jetporch discord and the only useful thing I did in there was tell megaman avatar dude to include variable provenance in jetporch's version of `ansible-inventory`. It's pretty dumb that you can't just run an inventory and include a flag that says hey, tell me where all this crap came from, ideally with a verbose override chain for me to look at. It's something that would have been really easy to include in ansible 10 years ago but would be really hard today, which is a shame, because yeah inventory and variables is the main source of bad.

At my current role our inventory files are a single line of yaml that passes a company specific id into a python function that talks to all of our internal stuff in the right way. We don't actually own any variables at all and we want to keep not owning them.
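
For anyone who hasn't seen that pattern: the moving part behind a one-line inventory file is an inventory plugin or, in its older form, a dynamic inventory script that emits JSON groups and hostvars. The company-specific lookup obviously isn't reproducible here, so the sketch below stubs it with a hard-coded dict; the group names, hostnames, and variables are all invented.

```python
#!/usr/bin/env python3
# dynamic_inventory.py -- sketch of a dynamic inventory script (the pre-plugin form).
# Ansible invokes it with --list (and optionally --host <name>) and parses the JSON.
import json
import sys


def lookup_hosts():
    # Stand-in for the internal service call; the real version would take the
    # company-specific id and ask whatever source of truth owns the variables.
    return {
        "web": ["web-01.example.internal", "web-02.example.internal"],
        "db": ["db-01.example.internal"],
    }


def build_inventory():
    inventory = {"_meta": {"hostvars": {}}}
    for group, hosts in lookup_hosts().items():
        inventory[group] = {"hosts": hosts, "vars": {}}
        for host in hosts:
            inventory["_meta"]["hostvars"][host] = {"env": "prod"}
    return inventory


if __name__ == "__main__":
    if "--host" in sys.argv:
        # Per-host vars already live in _meta, so this can stay empty.
        print(json.dumps({}))
    else:
        print(json.dumps(build_inventory()))
```

Point a playbook or ansible-inventory at the executable script (or, in the plugin form described above, at a one-line YAML file naming the plugin) and Ansible itself never owns any of the variables, which is exactly the "we don't own them and want to keep not owning them" setup.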

xzzy
Mar 5, 2009

I think all config management tools suck but if I had to pick one I'd stick with puppet just because I got 10+ years experience living with that pain. At least it's a dull pain now.

Part of it is also because prior to puppet we had cfengine, which is absolutely the worst config management tool out there.

The Fool
Oct 16, 2003


there's a legacy thing that I maintain that was built as pipeline steps calling ansible playbooks and it's not actually the worst

Docjowles
Apr 9, 2009

xzzy posted:

I think all config management tools suck but if I had to pick one I'd stick with puppet just because I got 10+ years experience living with that pain. At least it's a dull pain now.

Part of it is also because prior to puppet we had cfengine, which is absolutely the worst config management tool out there.

At my Salt-heavy job I was tasked with replacing cfengine with Salt. Which definitely colors my perception, since literally anything was better than cfengine lol

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
I strongly dislike salt and managing yet another agent on machines. Thankfully we don’t have a lot of unmanaged VMs these days, but ew and gross respectively. Ansible is agentless and just needs SSH access, and for that it’s wonderful.

I can see where salt reactor has its perks, but frankly I’m just not that interested in spending a lot of time on config/instance management. It’s not exactly a fascinating problem set, all the options suck to one degree or another, and I’d rather work on newer and more interesting technologies that offer better growth potential and career opportunities.

Which, as an aside, is kind of weird. It's way harder to properly and efficiently manage state on VMs than it is to deploy and operate applications in a managed Kubernetes cluster! It's baffling the market doesn't reward the skillset as highly when it seems essential to anything to do with datacenter ops and IaaS at scale. At a certain point you can't abstract away managing bare metal again! Genuinely curious what the BigTech folks do for this problem.

The Iron Rose fucked around with this message at 05:13 on Jan 25, 2024

12 rats tied together
Sep 7, 2006

I consider myself well-compensated. I think the key is that you need to find a place that runs stuff on metal because they need to, not because they can't figure out what else to do.

Hadlock
Nov 9, 2004

The Iron Rose posted:

It’s baffling the market doesn’t reward the skillset as highly when it seems essential to anything to do with datacenter ops and IaaS at scale.

Why do you say this

Docjowles
Apr 9, 2009

12 rats tied together posted:

I consider myself well-compensated. I think the key is that you need to find a place that runs stuff on metal because they need to, not because they can't figure out what else to do.

:yeah:

You can probably extrapolate this to anywhere where the architecture is intentional, and they can explain the intention in a way that makes sense to you. If you are a generic small to medium SaaS running in a colo somewhere… why? If you are running a business that trips over one of the cloud landmines like internet egress pricing… also why.

I’ve been doing AWS for a while now because I find it very interesting (and lucrative). But I wouldn’t hate going back to the data center world on a large enough scale. I have loving zero interest in going back to managing like < 1000 pet servers. Working on tooling to manage 10k servers because it’s the right choice for the business? Now we’re talking.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

Hadlock posted:

Why do you say this


Regarding the comparative difficulties:
I haven’t done any real on prem work in my career, and I’m also at the point in my career where I’m interested in more than the abstractions cloud providers give me. I’m also leery of how much money we’re shoveling away every month and I do think that at scale there are legitimate benefits to supporting large on prem fleets of infrastructure.

Managing infrastructure at that scale is an interesting problem to solve. The big three obviously do so at extraordinary scale. I think it's a much harder problem set than much of what gets solved in the big wide wonderful world of devops, because you have to build many of the abstractions and services you get in AWS et al. yourself.

Regarding the compensation:

It might just be the lovely public LinkedIn jobs I see, or businesses that don't operate at huge scale, but I don't see many job postings emphasizing on-premises environments, and those that do seem to pay vastly less than what I make doing cloud bullshit.

Docjowles
Apr 9, 2009

I would never recommend intentionally getting out of cloud bullshit if you like it and are doing well

Hadlock
Nov 9, 2004

The only on prem stuff I've seen is local backups of company laptops and whatever else the IT guys dream up as a use for the 30-bay Synology unit they bought ten years ago

Probably the reason the on prem guys pay such garbage is they aren't willing to splash out for infrastructure improvements.

On prem seems like such a wild concept these days: spend a ton of time on growth forecasting, assembly, burn-in, etc., and then you're on the hook for finding and solving weird hardware problems. With an EC2 instance, if the hardware it's running on rolls over, it just "reboots" elsewhere in the cloud

I thought this was interesting

https://thenewstack.io/merchants-of-complexity-why-37signals-abandoned-the-cloud/

The headline is misleading; they're projecting going from 22.4 million a year in the cloud down to 21.8 million a year on prem. So they're saving roughly 600k a year, but picking up a tremendous amount of headcount, plus paying taxes on those business assets, etc.

Me personally I'd rather hire a jr and a sr cloud engineer and pay the credit card bill each month

SeaborneClink
Aug 27, 2010

MAWP... MAWP!
You can also amortize your sunk capital costs in a data center, can't really do that in the cloud :ssh:

Hadlock
Nov 9, 2004

Amortizing $600k over 27 years barely covers the bean counter's salary to do the calculation

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug

Docjowles posted:

Working on tooling to manage 10k servers because it’s the right choice for the business? Now we’re talking.

This is me but it's more like 100k I think.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
if you have a weird hardware problem on-prem, you don’t have to figure it out, your vendor does.

xzzy
Mar 5, 2009

It's all IT, cloud vs metal just exchange some benefits and annoyances. The unifying feature is users screwing stuff up and that's the real pain that no one has fixed.

I personally prefer having on prem servers but that's probably just because I've done it that way most of my career. Hardware support is pretty easy these days.. there's not a lot of tracking down stupid incompatibilities or driver bugs anymore.

The biggest aggravation is OS and security updates, we're rushing to clear out RHEL 7 before it's EOL and it is always a huge headache to get users to migrate and test their poo poo.

tortilla_chip
Jun 13, 2007

k-partite

The Iron Rose posted:

Genuinely curious what the BigTech folks do for this problem.

There are 8 digit Chef deployments out there.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Hadlock posted:

The only on prem stuff I've seen is local backups of company laptops and whatever else the IT guys dream up as a use for the 30-bay Synology unit they bought ten years ago

Probably the reason the on prem guys pay such garbage is they aren't willing to splash out for infrastructure improvements.

On prem seems like such a wild concept these days: spend a ton of time on growth forecasting, assembly, burn-in, etc., and then you're on the hook for finding and solving weird hardware problems. With an EC2 instance, if the hardware it's running on rolls over, it just "reboots" elsewhere in the cloud

I thought this was interesting

https://thenewstack.io/merchants-of-complexity-why-37signals-abandoned-the-cloud/

The headline is misleading; they're projecting going from 22.4 million a year in the cloud down to 21.8 million a year on prem. So they're saving roughly 600k a year, but picking up a tremendous amount of headcount, plus paying taxes on those business assets, etc.

Me personally I'd rather hire a jr and a sr cloud engineer and pay the credit card bill each month

DHH making a dumb decision to try to flex for clout? You don't say!

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It's not a decision I would make lightly but he isn't wrong. Cloud systems used to have pretty clear and obvious constraints, and these constraints shaped the architectures of the systems around them: either something is built to work well in the cloud, or it isn't. One at a time, cloud providers have started to ship new services and features that say, "Bolt on this new product and you lose this constraint!" Because Developers Have Total Autonomy now, the resulting systems design around it is an impenetrable clusterfuck wasteland of totally arbitrary decisions that nobody understands the implications of, least of all the people who made them, and it's political suicide to actually try to pull any of it together into some kind of half-lucid governance.

Any time you hear leaders of a modern tech org talk about "reducing spend", what they're actually talking about is consolidating systems, because this is the literal only remaining lever to try and align decisions about systems architecture that doesn't get so many people yelling that you lose the ability to do the rest of your job.

The cloud is to agility-minded business execs what screen time is to children: a great enabler and a potentially extremely debilitating tool.

Vulture Culture fucked around with this message at 18:45 on Jan 25, 2024

12 rats tied together
Sep 7, 2006

yeah DHH was right about this one but last time I posted that online everybody brought me printouts of the massive amount of DHH Ls that exist, so it's not very exciting to pinch hit for him.

abraham linksys
Sep 6, 2010

:darksouls:
idk man there's such a gradient here (and even DHH would admit that)

you can have hosting in the cloud without using a bunch of bespoke cloud products. no tiny startup or small bootstrapped company should be starting with a significant investment into their own hardware in a colo or some poo poo. you want a VPS on someone else's server. run your services in docker containers with no knowledge of the cloud they're hosted in. get some managed database hosting, you probably don't want to run your own database servers early on.

once you get larger, reevaluate as you grow. are you growing in a stable, predictable way? do you really have confidence your self-hosted services would have less downtime than the cloud providers available? can you run the numbers and find a point at which hosting things on your own is cheaper than your current hosting bill? maybe consider getting your own hardware.

and, yknow, you're still going to use "the cloud," because it's the internet. hey.com is still served behind cloudflare, they're not out there running their own CDN. if you're not an email service, you're probably not going to run your own email servers (good luck on your deliverability!), you'll use SES or something.

and there's all kinds of additional things you'll want that you could host on-prem, or you could run yourself on a cloud server, or you could use a fully-managed cloud provider: your database, your CI runners, your docker registry, etc. all individual tradeoffs. we're about to move from jenkins we run in EC2 to github actions because we simply do not have the resources required to maintain our jenkins boxes, and the cost tradeoff is less than hiring more engineers.

12 rats tied together
Sep 7, 2006

it's hard to respond to that because it wanders a bunch, so I'll just say that I disagree with all of it. It is cheaper to run stuff on prem, you can run a database in a colo (why would you want to pay egress traffic per API call?), and startup infrastructures are extremely simple and can be provided as part of a managed services deal by the colo provider, because it's basically just a network drop to a cage.

you might need to start buying cloud infrastructure when you go from 100 customers to 1,000, which is what the cloud is good at anyway (buy the baseline, rent the spike), or if your infrastructure is so complicated that you can't run it off of 1 pair of core switches. plenty of businesses never get more complicated than this, I've worked at a few, and we get something like 8x cheaper infrastructure running out of level 3 chicago.

CDN doesn't count because essentially nobody runs their own CDN, it's not even on the table. CI runs just fine on-prem but there's 0 chance that startup founders have experience doing this.

in general I'd suggest that what startups do is more of an artifact of who creates startups, and doesn't have much to do with actual ROI

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
What's the pattern for handling provider versioning within Terraform modules?

Let's say I have module A that is consumed by base. base says it needs provider version ~> 3.0. A says it needs provider version >= 3.0.0. They both have lock files that point to the same version, 3.5.

I upgrade base from 3.5 to 3.8. base immediately breaks because Module A is on 3.5. base can't simultaneously use 3.5 and 3.8. Is the trick to simply exclude lock files from version control for modules? That seems very wrong.

[edit] let's say that module A is being sourced out of a Git repo reference to a tag, not anything fancy like Terraform cloud.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

New Yorp New Yorp posted:

What's the pattern for handling provider versioning within Terraform modules?

Let's say I have module A that is consumed by base. base says it needs provider version ~> 3.0. A says it needs provider version >= 3.0.0. They both have lock files that point to the same version, 3.5.

I upgrade base from 3.5 to 3.8. base immediately breaks because Module A is on 3.5. base can't simultaneously use 3.5 and 3.8. Is the trick to simply exclude lock files from version control for modules? That seems very wrong.

[edit] let's say that module A is being sourced out of a Git repo reference to a tag, not anything fancy like Terraform cloud.

I believe the lock files for everything except the root module are irrelevant and are ignored, and only the constraints are considered when evaluating provider versions against modules

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Vulture Culture posted:

I believe the lock files for everything except the root module are irrelevant and are ignored, and only the constraints are considered when evaluating provider versions against modules

That's not the behavior I'm encountering, although it would make sense. I'll come up with a coherent example later.

The Fool
Oct 16, 2003


Even if lock files are parsed outside of the root module, you shouldn't use them anywhere else.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

The Fool posted:

Even if lock files are parsed outside of the root module, you shouldn't use them anywhere else.

So lock files should only be in root modules? Then how do you handle

root module -> something that was originally also a root module but now is being used as part of a larger initiative but is also still a root module in some cases -> true, non-root module?

[edit] For what it's worth, I'm trying to rein in people who are attempting to over-module things, but I'm not having a ton of success.

New Yorp New Yorp fucked around with this message at 00:15 on Jan 26, 2024

The Fool
Oct 16, 2003


you don't

module management is poo poo and having more than a single layer of modules just increases that poo poo exponentially

the "right way" would be to break up your old root "module" into one or more consumable modules and never nest them
