|
I feel weird asking kubernetes questions in here, might be a better fit for the linux thread? sorry. If I need to schedule a pod on a node that is expected to have a specific external device mounted, in my mind this is a job for labelling the node with hasdevice=true and a podspec nodeSelector hasdevice: true? Yeah, super dumb to tie yourself to a single node, but I want to set up a super simple rsync pod to back up my synology to a local drive. Without standing up additional hardware, the only thing I really have available is an external USB drive and my little USFF kubernetes cluster, which is running Talos so I can't (easily) run rsync on the underlying node OS itself… but I guess I get to fiddle with my kube skillset a little, which is always nice for a guy who is never hands-on keyboard.
|
# ? Aug 26, 2022 13:25 |
|
some kinda jackal posted:If I need to schedule a pod on a node that is expected to have a specific external device mounted, in my mind this is a job labelling the node with hasdevice=true and podspec nodeSelector hasdevice: true? Yeah, this is the way to do it. It's not highly-available but that's not worth worrying about for your home stuff. It's how I make sure Home Assistant runs on the node that has the ZWave stick attached.
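A minimal sketch of that setup (node name, image, and hostPath are made up; one gotcha is that label values in nodeSelector have to be quoted strings, since a bare true would parse as a YAML boolean and fail validation):

```yaml
# Label the node the USB drive is plugged into (hypothetical node name):
#   kubectl label node worker-1 hasdevice=true
apiVersion: v1
kind: Pod
metadata:
  name: rsync-backup
spec:
  nodeSelector:
    hasdevice: "true"        # must be a string, not a YAML boolean
  containers:
  - name: rsync
    image: alpine:3.16       # placeholder image; install/run rsync as needed
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: usb
      mountPath: /backup
  volumes:
  - name: usb
    hostPath:
      path: /mnt/usb         # wherever the node mounts the drive; an assumption
      type: Directory
```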
|
# ? Aug 26, 2022 16:31 |
|
some kinda jackal posted:I feel weird asking kubernetes questions in here, might be a better fit for the linux thread? sorry. This is what you want to actually do. But if you want to be hardcore about it, you can look into custom hardware resources being visible to the scheduler and write your own plugin https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/
|
# ? Aug 26, 2022 21:48 |
|
Methanar posted:I have just spent the last 90 minutes conclusively proving that something should not work. And yet it does. thread title
|
# ? Aug 26, 2022 22:07 |
|
After a whirlwind few months of applying random helm charts and resources from yaml lying around my various laptops onto my cluster and then promptly forgetting what is installed or where the original sources are, I’m kind of ready to see if gitops solves this for me a little. I don’t think I’m looking to buy into a philosophy or anything; at this point I think I just need something like Argo as a single source of truth across all my little lab clusters, given how frequently I blow everything up.

Methanar posted:This is what you want to actually do.

O hecc. I’m not going to lie, I forgot that my synology has a USB port so I can just like… plug the drive in there directly… But I’mma still write the rsync pod setup because it sounds like a good way to waste a few hours. Not that plugin thing though. That sounds like madness.
|
# ? Aug 26, 2022 23:53 |
|
some kinda jackal posted:After a whirlwind few months of applying random helm charts and resources from yaml lying around my various laptops onto my cluster and then promptly forgetting what is installed or where the original sources are, I’m kind of ready to see if gitops solves this for me a little. It sounds like you should just start with git as a single source of truth and not worry about deploying automatically. You can still use kubectl on whichever machine you’re sitting at - just be sure to pull first and commit and push after. Later you can add a pipeline to do it for you.
|
# ? Aug 27, 2022 00:33 |
|
I don’t disagree, but I’ve definitely got to get into the habit of maintaining my repos better. I say my stuff is all over the place, but in the end everything IS checked into a repo. But it’s all a collection of loose manifests named temp.yaml and asdf.yaml in a tmp/tmp/testing/tmp directory. Now I’m looking at a big “homelab” git repo that has like fourteen directories I need to clean up, so I think I was looking to start fresh with a new mindset, and probably argo or flux were just going to be a convenient vehicle to say “ah ok i’m gonna do it right this time”. Because I think deep down in my imagination I was picturing finding that one /really good/ unicorn tutorial on youtube that would explain how to do everything sanely, starting with how to properly structure your config-as-code/gitops stuff, and then how to apply it in a fun new-world way. (Sorry gang, you have to remember I’m a glorified policy paper pusher that only does this in his spare time. I wish I had the time to really dig into this fun stuff on the daily.) some kinda jackal fucked around with this message at 00:52 on Aug 27, 2022 |
# ? Aug 27, 2022 00:48 |
|
I have kind of a newbie AWS question. We have a Rails API running on Kubernetes that runs a suite of b2b products that we sell. Because of the nature of the business we never have to deal with insane amounts of traffic. However, we have a newer offering that exposes some functionality to the public, and that's going to wind up with potentially a lot of traffic hitting one or two of our endpoints. FWIW these are just simple GET endpoints that load some data from the DB and display it. In order to not bring our setup to its knees I'm thinking I want a caching layer in front of those endpoints, particularly because I don't care if the data is stale for a few minutes (unlike a lot of the other data in our system). What I'm thinking of doing is the following:

1. Create a cloudfront distribution
2. Point it at api.mycompany.com (or maybe just api.mycompany.com/route1 and api.mycompany.com/route2?)
3. Configure cache headers to stick around for however long makes sense
4. Create public-facing-api.mycompany.com and point it at the cloudfront distro
5. Set up the public facing front-end app to request data from public-facing-api.mycompany.com

The other setup I had in mind was to take these two endpoints and rewrite them as lambda functions, and then put API Gateway in front of those and do a similar thing with public-facing-api.mycompany.com pointing at the API Gateway. If API Gateway supports it I suppose I could also have it sitting in front of my Rails API instead of creating new lambda functions. We normally deal with pretty predictable amounts of traffic so I've never really had to handle the possibility of a crazy spike like this before.
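For step 3, the knob that matters is the Cache-Control header the Rails origin returns, since CloudFront keys its TTL off that. A hedged sketch of the value you'd want (the 300-second TTL is an assumption; s-maxage governs shared caches like a CDN, max-age governs browsers):

```ruby
# Build the Cache-Control value a shared cache will honor. Pure-Ruby
# sketch; in a Rails controller the built-in helper does the same thing:
#   expires_in 5.minutes, public: true
def public_cache_headers(ttl_seconds)
  { "Cache-Control" => "public, max-age=#{ttl_seconds}, s-maxage=#{ttl_seconds}" }
end

puts public_cache_headers(300)["Cache-Control"]
# => public, max-age=300, s-maxage=300
```

With that in place, 10,000 people descending on an endpoint means one origin hit per TTL window and 9,999 cache hits, which is the whole point of the exercise.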
|
# ? Aug 29, 2022 19:38 |
|
Sounds like it’s worth it to rewrite them as Lambdas and put them behind API gateway.
|
# ? Aug 29, 2022 20:52 |
|
LochNessMonster posted:Sounds like it’s worth it to rewrite them as Lambdas and put them behind API gateway. Is there much value in rewriting them as Lambdas if the API Gateway is going to cache responses anyway? My understanding is if 10,000 people descend on an endpoint then the first person will hit our actual API and then the other 9999 get the cached response until the TTL runs out.
|
# ? Aug 29, 2022 21:51 |
|
prom candy posted:Is there much value in rewriting them as Lambdas if the API Gateway is going to cache responses anyway? My understanding is if 10,000 people descend on an endpoint then the first person will hit our actual API and then the other 9999 get the cached response until the TTL runs out. APIGateway cache is expensive iirc
|
# ? Aug 29, 2022 21:58 |
|
Mostly because it’s easier to manage and no need to look at scaling. Also likely cheaper but I guess you’ll keep k8s for the other apps so that’s a bit of a moot point. If it’s really a simple API that does more or less static DB queries then why not keep the solution as simple as possible. Orchestration is probably a lot easier as well. Going serverless means less work/maintenance for you.
|
# ? Aug 29, 2022 22:03 |
|
AWS published this new feature recently, i posted it to the raspberry pi thread, I think because I thought it might be useful to dump heavy compute to the cloud or something from an IoT box https://aws.amazon.com/blogs/aws/announcing-aws-lambda-function-urls-built-in-https-endpoints-for-single-function-microservices/ Not sure what the pricing difference is between the two but function urls look to be lighter-weight/cheaper so 30-50% cheaper? wild-rear end guess aws posted:When to use Function URLs vs. Amazon API Gateway
|
# ? Aug 29, 2022 22:09 |
|
Love Stole the Day posted:APIGateway cache is expensive iirc

Time is more a factor than cost right now, but that's still good to know.

LochNessMonster posted:Mostly because it’s easier to manage and no need to look at scaling. Also likely cheaper but I guess you’ll keep k8s for the other apps so that’s a bit of a moot point.

Oh so basically put the endpoints on Lambda and don't cache them? That still results in a lot of DB reads though, right? I guess MySQL/RDS has its own caching layer for those queries. And yeah, we're stuck with the Rails/k8s setup unfortunately. I posted in the Working in Development thread a couple weeks ago about a reasonable path forward away from our Rails monolith, and sadly the answer is "there isn't one that makes any business sense."

Hadlock posted:AWS published this new feature recently, i posted it to the raspberry pi thread, I think because I thought it might be useful to dump heavy compute to the cloud or something from an IoT box

This seems pretty applicable, except I'm not sure about not having custom domains. It seems a bit weird to expose to anyone who wants to look in the network tab that we're making direct calls to a specific AWS service. Maybe that's a non-issue that I just made up in my head though.
|
# ? Aug 29, 2022 22:41 |
|
It's possible that in 2022 somebody cares about which cloud hosted provider you're on, but I can't think of a use case
|
# ? Aug 30, 2022 01:51 |
|
Hadlock posted:It's possible that in 2022 somebody cares about which cloud hosted provider you're on, but I can't think of a use case Lost me here.
|
# ? Aug 30, 2022 02:17 |
|
Me: try amazon function urls
Him: function urls have AWS in the URL string, if that matters
Me: it almost certainly does not matter

You can't assign a CNAME to function urls afaik, so they all end up looking like this (example url from AWS): https://4iykoi7jk2kp5hhd5irhbdprn40yxest.lambda-url.us-west-2.on.aws/ Visiting almost any website kicks off calls to all sorts of trackers, so that seems like it would blend in with the other crap, but maybe that's just me
|
# ? Aug 30, 2022 03:46 |
|
azure functions can be referenced by cname, I would be very surprised if aws was different
|
# ? Aug 30, 2022 03:50 |
|
Yeah, looks like there is a tutorial… here? https://www.serverless.com/plugins/serverless-aws-function-url-custom-domain I've never implemented this so I'm just guessing, based on his response about the API URL being poorly formed for his use.
|
# ? Aug 30, 2022 03:58 |
|
Even if you can cname it, anyone who cares can see what the name really resolves to in 2 seconds. You aren’t getting any infosec wins by doing that. Maybe it’s a friendlier user experience, and it lets you change where the service lives just by updating dns. But it doesn’t do anything meaningful to hide the fact that you’re using AWS.
|
# ? Aug 30, 2022 04:04 |
|
Security through obscurity, mang, ride the wave
|
# ? Aug 30, 2022 04:06 |
|
Docjowles posted:Even if you can cname it, anyone who cares can see what the name really resolves to in 2 seconds. You aren’t getting any infosec wins by doing that. Maybe it’s a friendlier user experience, and it lets you change where the service lives just by updating dns. But it doesn’t do anything meaningful to hide the fact that you’re using AWS. maybe im missing something, but... so?
|
# ? Aug 30, 2022 04:17 |
|
The Fool posted:so? I think that is basically the takeaway lol. OP said they felt weird exposing that their app runs in AWS. Was just replying to that, if someone is motivated and actually cares to know where your poo poo runs they can figure it out regardless. There’s other reasons to use a custom domain but security isn’t really one of them.
|
# ? Aug 30, 2022 04:29 |
|
Thanks for all the info/discussion, glad to know using the AWS url wouldn't matter if I went that route. I ended up putting api gateway with caching in front of the two existing endpoints. I think this is probably the fastest solution and is only gonna add like $30-40 to our monthly spend as far as I can tell. The front end of this is a nextjs SSR app also running in our k8s cluster but I'm expecting that to be able to handle waaaay more requests than the big chunky rails app.
|
# ? Aug 30, 2022 05:49 |
|
people can tell you’re on aws just based on IP
|
# ? Aug 30, 2022 07:23 |
|
people can tell you're on aws by just assuming you're on aws and they'd be right 60% of the time
|
# ? Aug 30, 2022 07:52 |
|
Hadlock posted:It's possible that in 2022 somebody cares about which cloud hosted provider you're on, but I can't think of a use case
|
# ? Aug 30, 2022 07:54 |
|
Walmart, somewhat famously.
|
# ? Aug 30, 2022 15:21 |
Retail as a whole tbqh
|
|
# ? Aug 30, 2022 15:25 |
|
Hadlock posted:AWS published this new feature recently, i posted it to the raspberry pi thread, I think because I thought it might be useful to dump heavy compute to the cloud or something from an IoT box Well jeez I'm glad I posted in this thread because unrelated to the problem I showed up with we're spending like $500/mo on API gateway for a lambda webhook that just gets absolutely pounded. It looks like moving this over to a function URL can save us a fortune. Thanks again for posting it.
|
# ? Aug 30, 2022 21:27 |
|
Yeah, I think the pricing for a million hits on that thing in the example they give is $5 a month, which is uh, substantial cost savings. You can PayPal me my cut of your cost savings bonus to my PMs. Hadlock fucked around with this message at 22:46 on Aug 30, 2022 |
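For a rough sense of where the savings come from, here's the back-of-envelope math. The unit prices are assumptions based on 2022 us-east-1 list pricing (REST API Gateway around $3.50 per million requests, Lambda around $0.20 per million invocations, with function URLs adding nothing on top of the Lambda charge), so check the current pricing pages before trusting the numbers:

```ruby
# Per-million-request unit prices in cents, kept as integers so the
# arithmetic is exact. These figures are assumptions from 2022 pricing.
APIGW_REST_CENTS = 350   # API Gateway REST API requests, first tier
LAMBDA_REQ_CENTS = 20    # Lambda invocation charge; compute billed separately

def monthly_dollars(millions_of_requests, cents_per_million)
  millions_of_requests * cents_per_million / 100.0
end

# 30M requests/month through a REST API in front of Lambda, vs. a bare
# function URL (which pays only the Lambda invocation charge):
puts monthly_dollars(30, APIGW_REST_CENTS + LAMBDA_REQ_CENTS)  # => 111.0
puts monthly_dollars(30, LAMBDA_REQ_CENTS)                     # => 6.0
```

The gap only widens with volume, which is why a heavily pounded webhook is the textbook case for dropping the gateway.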
# ? Aug 30, 2022 22:35 |
|
People that pay for bandwidth out of your SaaS product and have significant bandwidth needs care what provider you’re hosted under, oh yes indeed. Nobody wants to pay an extra $10k / month for worse latency to their other software unless you’re asinine and flush with cash like DoD.
|
# ? Aug 30, 2022 22:57 |
|
Current mood:

Need to do a bulk update for a thing in an internal service. The internal service has a regression where it does not actually have any sort of built-in exposed mechanism for bulk updates, unlike the thing it replaced. The internal service does not have API docs. Clicking around in a web UI with chrome dev tools open and just seeing what happens when you click things is the API docs.

Spent the day writing a 200 line script to do a bulk update for some internal service my own way. Which had to be multithreaded, because going from my laptop, over the vpn, over to aws and back again was way too slow to do serially for 10000 objects. Find a ton of partially related, pre-existing schema errors and just completely ignore them.

Start looking around to see if any local Pizza Huts are hiring. Methanar fucked around with this message at 00:21 on Aug 31, 2022 |
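The shape of that script is roughly a thread pool draining a queue, since per-object latency over the VPN, not CPU, is the bottleneck. A sketch with a simulated round trip standing in for the real HTTP call (function names and the worker count are made up):

```ruby
# Thread-pool sketch: latency-bound bulk updates go roughly N times
# faster with N workers, because each "request" spends its time waiting
# on the network, not computing.
def bulk_update(ids, workers: 16)
  queue = Queue.new
  ids.each { |id| queue << id }
  done = Queue.new
  threads = Array.new(workers) do
    Thread.new do
      # pop(true) is non-blocking and raises ThreadError when empty,
      # which is our signal that this worker is finished
      while (id = (queue.pop(true) rescue nil))
        done << update_object(id)
      end
    end
  end
  threads.each(&:join)
  Array.new(done.size) { done.pop }
end

def update_object(id)
  sleep 0.001  # stand-in for the real PUT/PATCH over the VPN
  id
end
```

With 10,000 objects at, say, 100ms per round trip, serial is around 17 minutes; 16 workers brings that down to about a minute, which matches the "way too slow to do serially" experience.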
# ? Aug 31, 2022 00:17 |
|
Methanar posted:Current mood: I feel this, basically had to do the same for the cost tracking application I was bitching about in the AWS thread. It ostensibly has an API but the documentation is just dickbutt.gif and I had to trial and error absolutely everything including authentication. Which is especially infuriating for a paid product. Do your customers normally onboard hundreds of AWS accounts by clicking through the wizard over and over like it’s Windows NT 4? I felt like I was the first person to ever ask if this could be automated. The weird part was the API was actually very good and complete. You’d just never know it from the docs, which were wrong where they existed at all.
|
# ? Aug 31, 2022 04:26 |
|
That smells to me like deliberate obfuscation for the intention of selling proserv.
|
# ? Aug 31, 2022 17:55 |
|
Hadlock posted:Yeah I think the cost pricing for a million hits on that thing in the example they give is $5 a month which is uh, substantial cost savings If it was my company I would! My cut will be $0 and a pat on the back.
|
# ? Aug 31, 2022 21:00 |
|
prom candy posted:If it was my company I would! My cut will be $0 and a pat on the back.

Lucky you, most companies would complain about why you let this situation exist in the first place. You could’ve saved so much money if you’d prevented it or found out about it sooner. By being this late you literally cost your management thousands of dollars in missed bonuses.
|
# ? Sep 1, 2022 10:36 |
|
LochNessMonster posted:Lucky you, most companies would complain why you let this situation exist in the first place. Thankfully we're too small to have any layers of management, and I inherited this devops setup so I'm innocent!
|
# ? Sep 1, 2022 14:31 |
|
NUMA: do I care? We're deploying a new very large hardware SKU in the datacenter for use by Kubernetes. Do I need to worry about NUMA awareness or anything like that? Looks like Kubernetes has a newer feature specifically designed around setting up NUMA affinity. Does anybody have experience with this particular feature, or with NUMA concerns/gotchas in general? https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/
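For reference, the linked Memory Manager feature is wired up through the kubelet config and only pins memory for Guaranteed-QoS pods. A sketch of the relevant KubeletConfiguration fields, per that docs page (the reservedMemory numbers are assumptions; the totals have to match your kube-reserved/system-reserved plus the hard eviction threshold or the kubelet refuses to start):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static          # the Static memory policy requires this
memoryManagerPolicy: Static       # default is None, i.e. no NUMA pinning
topologyManagerPolicy: single-numa-node
reservedMemory:                   # must cover reserved + eviction memory
- numaNode: 0
  limits:
    memory: 1100Mi                # assumption; size to your node
```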
|
# ? Sep 1, 2022 18:25 |
|
if you have a diverse/mixed workload on the hardware (say dozens+ of containers) then no, you'll probably never notice and the defaults will probably be fine. if you run like one big ML pipeline and just happen to use kube to put the jobs on the hardware, then yeah, you'll have to dig in.
|
# ? Sep 2, 2022 03:18 |