|
Clark Nova posted:With SSDs you can just leave 'em banging around loose inside the case

The absolute cheapest option would be whatever PC you have + a Dell PERC H310 (or some other raid card that has or can be flashed to JBOD mode) + a rat's nest of SSDs. cat o' nine tails but with sata cables and ssds.

In infrastructure news I've been forced to use GCP for some things at work and it's not bad? Especially app engine, I like app engine a lot, it seems to just "make sense".
|
# ? Apr 20, 2020 00:35 |
|
|
# ? Jun 11, 2024 03:51 |
|
sure, it’s great until google loses interest and shuts it down. that’s scheduled for 2023 iirc
|
# ? Apr 20, 2020 14:14 |
|
Nomnom Cookie posted:are you on aws, because you definitely can get a fixed IP on aws You can, but you shouldn't.
|
# ? May 23, 2020 04:45 |
|
Bored Online posted:in all of yospos this thread most closely aligns with my profession and it is also the thread i understand the least in
|
# ? May 24, 2020 00:05 |
|
Bored Online posted:in all of yospos this thread most closely aligns with my profession and it is also the thread i understand the least in
|
# ? May 24, 2020 13:39 |
|
wilford brimley: kuberneetus
|
# ? May 24, 2020 14:04 |
|
gently caress me. We've got cloudtrail forced on all accounts, guardduty, config, inspector, ec2 logs, flow logs, a custom security tool that calls every service via an api, crowdstrike, nessus, mcafee. Everything is kept forever like some greedy hoarder but im pretty sure nothing is ever read. AWS was 10x more fun before all this crap I swear. Compliance/security/risk folks just can't be trusted.
|
# ? May 24, 2020 22:36 |
|
abigserve posted:cat o' nine tails but with sata cables and ssds. Literally made one of these with Ethernet cable and phone heads
|
# ? May 24, 2020 23:18 |
|
klosterdev posted:Literally made one of these with Ethernet cable and phone heads

sudo su: 2 strokes
unannounced reboot: 5 strokes
running a wow bot farm on the ML cluster: 25 strokes
|
# ? May 25, 2020 04:12 |
|
klosterdev posted:Literally made one of these with Ethernet cable and phone heads whip a switch with it and post it to onlyfans
|
# ? May 25, 2020 05:17 |
|
klosterdev posted:Literally made one of these with Ethernet cable and phone heads I'm not a hacker, but a cracker. No, the OTHER kind of cracker.
|
# ? May 25, 2020 11:20 |
|
klosterdev posted:Literally made one of these with Ethernet cable and phone heads ethernet bondage? you must be a twisted pair
|
# ? May 25, 2020 17:42 |
|
suffix posted:ethernet bondage? you must be a twisted pair dammit lol
|
# ? May 25, 2020 19:38 |
|
Captain Foo posted:dammit lol
|
# ? May 25, 2020 21:38 |
|
so i dont actually know what service meshes are so i googled it and... is it just people reimplementing packet switching in software over a bunch of VPNs?? whyy does anybody need this

if i were building a network of communicating computers i would simply delegate routing and traffic control to the network layer
|
# ? May 27, 2020 02:43 |
|
i have an idea. what if, instead of repeatedly reinventing the operating system layer of our software on top of previous operating system layers, we made one operating system layer, and then stopped. like say ive got some tensorflow code. so instead of running the code in a tensorflow vm on top of a python vm on top of a service mesh on top of a docker container on top of a kubernetes pod on top of a kubelet on top of a linux kernel on top of a VMWare hypervisor on top of a linux kernel i could simply run the code directly ??? food for thought
|
# ? May 27, 2020 02:56 |
|
animist posted:like say ive got some tensorflow code. so instead of running the code in a tensorflow vm on top of a python vm on top of a service mesh on top of a docker container on top of a kubernetes pod on top of a kubelet on top of a linux kernel on top of a VMWare hypervisor on top of a linux kernel i could simply run the code directly ??? you just dont get it op
|
# ? May 27, 2020 04:08 |
|
animist posted:i have an idea. what if, instead of repeatedly reinventing the operating system layer of our software on top of previous operating system layers, we made one operating system layer, and then stopped. you've never worked on anything at scale have you?
|
# ? May 27, 2020 05:20 |
|
animist posted:so i dont actually know what service meshes are so i googled it and... is it just people reimplementing packet switching in software over a bunch of VPNs?? whyy does anybody need this you nailed it. nobody ever needs it. service meshes are a communist plot to degrade our precious adtech microservices’ latency and throughput
|
# ? May 27, 2020 05:28 |
|
animist posted:like say ive got some tensorflow code. so instead of running the code in a tensorflow vm on top of a python vm on top of a service mesh on top of a docker container on top of a kubernetes pod on top of a kubelet on top of a linux kernel on top of a VMWare hypervisor on top of a linux kernel i could simply run the code directly ??? yes but you carry the pager
|
# ? May 27, 2020 05:30 |
|
carry on then posted:you've never worked on anything at scale have you? why are you using puppet, it’s a waste of time writing all those manifests. just back up the server regularly
|
# ? May 27, 2020 05:34 |
|
CMYK BLYAT! posted:yes but you carry the pager
|
# ? May 27, 2020 05:37 |
|
animist posted:so i dont actually know what service meshes are so i googled it and... is it just people reimplementing packet switching in software over a bunch of VPNs?? whyy does anybody need this

the point of service meshes is to get you to pay more to the cloud provider of your choice by adding overhead to your resource utilization

curiously the people putting the most effort into service meshes happen to also be cloud providers

animist posted:i have an idea. what if, instead of repeatedly reinventing the operating system layer of our software on top of previous operating system layers, we made one operating system layer, and then stopped.

- the docker container is effectively just the method of passing the filesystem image around as a series of tar files. btw no sane person still uses dockerd in the context of running a cluster, so use containerd or similar instead. dockerd is still acceptable for dev purposes on your local workstation but thats pretty much the only remaining use case for it since in every context it's too much of a flaky piece of poo poo and docker the company is dead
- the pod is a kernel cgroup which was created by the kubelet (or strictly the container engine attached to the kubelet). your container is effectively still running as a normal process on the host, it's just in a kernel-managed resource sandbox that was created by the kubelet as part of starting the process

so in that setup, the order boils down to: python runtime -> process within cgroup -> kernel -> vmware hypervisor. so pretty much the same as a normal process, just with cgroup rules applied to the process. to illustrate this, if you ran 'ps aux' on the host, you'd see all the container processes in there too
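a quick way to see the "it's just a normal process in a kernel sandbox" point on any linux box (the kubepods path below is an assumption; the exact cgroup layout varies by cgroup v1/v2 and distro):

```shell
# every linux process records its cgroup membership in /proc;
# a "pod" is just a dedicated cgroup subtree the kubelet had created
cat /proc/self/cgroup

# on a kube node the pod cgroups sit under a path roughly like this
# (illustrative only, differs across cgroup versions and distros):
# ls /sys/fs/cgroup/kubepods.slice/

# and container processes show up in an ordinary host process listing
ps aux | head -n 5
```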
|
# ? May 27, 2020 05:50 |
|
I like the idea that an application has all its services linked together dynamically over the network and it solves a lot of problems but whether it's easier to live with than a properly maintained LB/DNS configuration remains to be seen imo
|
# ? May 27, 2020 10:18 |
|
don't forget the reason for half those vms is some level of insulation from the other garbage code your code needs to potentially run on the same bare metal as and not be affected by
|
# ? May 27, 2020 14:37 |
|
doomisland posted:k8s is a google troll imo Releasing something that mostly works but has horrible hard to diagnose networking and autoscaling bugs if you try and run it at scale is an amazing trojan horse
|
# ? May 27, 2020 15:57 |
|
if k8s on aws provided a LoadBalancer service with support for http2 we’d probably not bother with linkerd
|
# ? May 27, 2020 18:11 |
|
animist posted:so i dont actually know what service meshes are so i googled it and... is it just people reimplementing packet switching in software over a bunch of VPNs?? whyy does anybody need this you don't need to use a service mesh but it could make sense if you want to encrypt internal traffic, do full request tracing or have a whitelist of services that can talk? or you could build that into each service, that also works
|
# ? May 27, 2020 22:20 |
|
all the services in a service mesh are unicast P2P right? if so why not just use ssl for each connection? i'm not seeing the advantages or even the difference in a service mesh compared to just a server that has ports opened and uses ssl to authenticate

e: i guess i see the advantage. it allows service-to-service communication to scale horizontally, and also provides snowflake service developers a structured way to integrate new datasources/services or allow others to access their snowflake services. but then you absolutely need a dedicated team to create and manage the service mesh, while also empowering them to enforce standards on access/queries to the service mesh. basically you need your good developers to build that instead of building your revenue generating app.

ate poo poo on live tv fucked around with this message at 22:57 on May 27, 2020 |
# ? May 27, 2020 22:45 |
|
carry on then posted:you've never worked on anything at scale have you? what has anything "at scale" ever done for us
|
# ? May 27, 2020 23:12 |
|
suffix posted:you don't need to use a service mesh but it could make sense if you want to encrypt internal traffic, do full request tracing or have a whitelist of services that can talk?

most of this can just be handled by the CNI provider, with better performance and without the insane resource overhead of hacks like adding sidecars to every pod or whatever istio is making GBS threads out these days

for example networkpolicies provide a generic and compatible-across-clusters path for declaring L3 (host/port) rules for blocking/allowing connections between pods. if your CNI provider (calico or weave, maybe others idk) supports it then the rules are enforced, or if your CNI provider doesn't (e.g. flannel) then they're just ignored. the rules are normally implemented via iptables on the host so they're low-overhead to boot

meanwhile if you want something like protocol-level rules (blocking/allowing specific HTTP paths for example) then you could use cilium for that, but i've only gone as far as using host/port networkpolicies with calico so idk how good that is
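for reference, a host/port networkpolicy of the kind described above looks like this (all names are made up; the rule only actually does anything if the CNI provider enforces NetworkPolicy, e.g. calico or weave):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api   # hypothetical
  namespace: prod               # hypothetical
spec:
  podSelector:
    matchLabels:
      app: api                  # pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```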
|
# ? May 27, 2020 23:52 |
|
somedays i'm really glad to just have to deal with ecs it Just Works and isn't trying to be too goddamn clever about anything
|
# ? May 27, 2020 23:53 |
|
ate poo poo on live tv posted:all the services in a service mesh are unicast P2P right? if so why not just use ssl for each connection? i'm not seeing the advantages or even the difference in a service mesh compared to just a server that has ports opened and uses ssl to authenticate

A big part of it is trying to build more autonomy into the app (or at least the systems running the app) and the last frontier of that is effectively DNS, LB and network layer security

the amount of people who actually need this is probably extremely small tbh but then I'd argue the exact same thing about k8s
|
# ? May 28, 2020 00:04 |
|
psiox posted:somedays i'm really glad to just have to deal with ecs fargate on ecs is what k8s should be imo
|
# ? May 28, 2020 02:38 |
|
Progressive JPEG posted:- the docker container is effectively just the method of passing the filesystem image around as a series of tar files. btw no sane person still uses dockerd in the context of running a cluster, so use containerd or similar instead. dockerd is still acceptable for dev purposes on your local workstation but thats pretty much the only remaining use case for it since in every context it's too much of a flaky piece of poo poo and docker the company is dead

ya this is fair. i just always have this nagging sensation of "i'm pretty sure that this could be simpler." maybe that's just entropy beckoning me to cut the power cables

carry on then posted:you've never worked on anything at scale have you?

nothing beyond a couple hundred users lol. my opinions should not be trusted
|
# ? May 28, 2020 02:42 |
|
animist posted:ya this is fair. i just always have this nagging sensation of "i'm pretty sure that this could be simpler." maybe that's just entropy beckoning me to cut the power cables

I mean that's not a knock on what you do and tbh at that level yeah, most of this is overkill and you're better off not using it

it's just there does come a point in size where some of this stuff starts to make a lot more sense, but the way this industry works everyone's gotta use the latest fad whether it makes sense or not
|
# ? May 28, 2020 03:16 |
|
Progressive JPEG posted:the point of service meshes is to get you to pay more to the cloud provider of your choice by adding overhead to your resource utilization

the service mesh pattern is fine insofar as it's not unreasonable to offload some things (TLS client auth, basic telemetry spans, whatever) to a generic HTTPS app-level layer. half this poo poo everyone was already doing in some form via reverse proxies and now they're just doing it for forward proxies too: it made sense to shove inbound requests through a common HTTPS layer when that became easily doable, and it makes sense to do it for outbound requests too now that there are tools to enforce it.

proxies always add compute and latency overhead. the argument is that the technical overhead is usually much cheaper than the human overhead of making sure all your Java apps and all your Python apps and all your ancient legacy apps do all that poo poo natively, because nobody wants to deal with 5 different languages' ecosystems for adding it, if they even can (a legacy service that only understands HTTP basic auth because that's what the now-defunct contractor used in ObscureLang back in the day cannot, and nobody wants to retrofit it now). you can't quantify the human cost as easily as the (clearly higher than before) technical cost, but that doesn't mean the human cost is therefore $0--it's still there, and will often outweigh the technical cost because human costs are both inherently expensive and more difficult to pare down

marketing departments are gonna push it with glossy nonsense because everyone wants a piece of that new market pie, but fundamentally the concept is a sane way to shift human cost into technical cost. lots of management persons are going to sit in a conference talk audience, hear the marketing fluff, and take it at face value that they can just cargo cult install some service mesh solution for instant massive gains without understanding the why or how, but that's lovely leadership in general. people that expect turnkey solutions to their exact problems make everything poo poo because they lack understanding of what they're trying to implement, but that's true no matter what you're doing. people will mcmansion their architectural ineptitude in any paradigm, and nobody will ever provide a technical solution for inept leadership

the current offerings aren't great yet because all implementations are new, but the concepts are sound. there isn't much in the way of guard rails and rough edges abound, but there are capable people working on smoothing them and trying to make them easier to use because there's a lot of money in that.

doomisland posted:k8s is a google troll imo

k8s isn't a troll: google want to provide some sort of lingua franca around managing computing resources in modern environments based on their practical experience running one. everyone else has done so on their particular cloud compute platform in myriad ways, and there are legion sysadmins saying "by god we can continue to use provider-native tools to do the same poo poo", and they're not wrong, but they're not providing a lingua franca, they're providing an AWS or GCP or Azure or Tencent or what have you way of doing things set up to their own preferences. They may well be talented and capable of managing that system, but if you go that route, the onus is on you to provide and maintain the poo poo that works effectively and provides that infrastructure. Google has a specific market interest in k8s because they want to shear off as much AWS-specific poo poo as possible to try and make it easier to migrate off the market leader, but that doesn't mean they've created something that's fundamentally wrong

i am a vendor and i do not want to deal with whatever bespoke system your ops people came up with, i want to say "this is how you deploy our app in a cluster based on common standards" same as we have elsewhere. if i ask for a port, i get a port, and i give zero fucks as to exactly how that port is exposed on the internet. it might be an AWS NLB or Google's NLB equivalent, but gently caress it, it's an addressable network port. kind Service Type LoadBalancer effectively expresses as much. sure, there's plenty of unknown space filled with crazy provider-specific spaghetti, but that's part of the process of figuring out how to do it well. that k8s concept will probably endure, and you can probably fix your bad implementation of the k8s concept as or more easily than you can fix whatever bespoke solution your current senior devops engineer set up before they retired and were replaced with incompetent bodyshop mooks

there's gonna be bullshit and confusion for a long time. i am not at all happy that AWS have decided to repurpose Ingress path rules as a means to add their particular HTTP to HTTPS redirect implementation, but they did so in a vacuum of official guidance and i can only fault them so much--they chose a terrible implementation that doesn't work elsewhere and is stupid, but so flows the marketplace of ideas--sometimes you get proposals that suck, but such is the way you determine what the new standard needs to do in a less stupid way going forward

tl;dr the "there must be a simpler way" thought isn't wrong, but that simpler way exists only in your head or your team's tribal memory. someone else will have to deal with your simpler way going forward, and you better hope they can work with your simpler way indefinitely or can transition off it easily if need be. you'll probably want to have your voice heard during development of the more common, more complex way regardless, because recruiting people for your bespoke ivory tower stack is gonna be hard
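the "if i ask for a port, i get a port" contract is literally a Service of type LoadBalancer; a minimal sketch (names are made up), which the cluster's cloud integration then turns into an AWS NLB, Google's equivalent, or whatever the provider has:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app        # hypothetical
spec:
  type: LoadBalancer  # provider decides how the port reaches the internet
  selector:
    app: my-app       # pods that receive the traffic
  ports:
    - port: 443       # externally addressable port
      targetPort: 8443
```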
|
# ? May 28, 2020 05:58 |
|
CMYK BLYAT! posted:k8s isn't a troll: google want to provide some sort of lingua franca around managing computing resources in modern environments based on their practical experience running one.

Sure, except k8s is all the stuff they weren't allowed to put in borg, and borg actually works for large workloads without the HPA making GBS threads itself 12 times a day because you have too many containers. We've released this thing we obviously don't use ourselves, once you've finished burning eng resource trying to get it to work why don't you pay us to use our closed source actually working platform.

"Service mesh"

carry on then posted:you've never worked on anything at scale have you?

To add to this, istio is over complicated but solves two big internet company problems:

The site is based on a billion micro services written in 5 languages and 20 different frameworks. How do I instrument a user journey consistently so we can actually debug when things go wrong without having to write and maintain sdks for all of these? How do you get 100 teams to spend the time doing this when product are shouting at them about the deadline for the new fart app?

Your site gets enough traffic that it has to be distributed across multiple DCs / regions. How do you re-route traffic when a dependency fails without failing the whole region?

Both are achievable without a mesh, but the mesh makes it much easier for the devs to do it in a consistent way across your org.
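fwiw the "consistent instrumentation" half mostly reduces to propagating one trace id header on every hop, which is what the sidecar does so each language's app code doesn't have to. toy sketch only: the x-trace-id name is a made-up stand-in for the real b3/traceparent header conventions, and real meshes also record timing spans per hop:

```python
# what a mesh sidecar automates: copy the incoming trace header onto
# every outgoing call, so spans from different services join one trace
import uuid

def propagate_trace(incoming_headers: dict) -> dict:
    # reuse the caller's trace id, or start a new trace at the edge
    trace_id = incoming_headers.get("x-trace-id", uuid.uuid4().hex)
    # headers to attach to every downstream request this service makes
    return {"x-trace-id": trace_id}

# mid-chain service: the caller's id is carried through unchanged
print(propagate_trace({"x-trace-id": "abc123"}))
# edge service: no incoming id, so a fresh 32-hex-char one is minted
print(propagate_trace({}))
```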
|
# ? May 28, 2020 08:29 |
|
the soviets just used consul
|
# ? May 28, 2020 08:58 |
|
CMYK BLYAT! posted:the service mesh pattern is fine insofar as it's not unreasonable to offload some things (TLS client auth, basic telemetry spans, whatever) to a generic HTTPS app-level layer. half this poo poo everyone was already doing in some form via reverse proxies and now they're just doing it for forward proxies too: it made sense to shove inbound requests through a common HTTPS layer when that became easily doable, and it makes sense to do it for outbound requests too now that there are tools to enforce it. i contemplate your ops wisdom and am enlightened. question: if the problem is that ObscureLang doesn't support authentication, tracing, etc, how does a service mesh help? it seems to me that those things interact with actual functionality in complex ways. so you either need to wire the service mesh in at the ObscureLang source code layer, or you'd need some hella complicated request inspection code at the service mesh layer. like, how do you trace a request through a language that doesn't support tracing? do you just correlate incoming and outgoing requests by time received or something?
|
# ? May 28, 2020 18:21 |