|
Ploft-shell crab posted:any of y’all tenants trying to get you to run a god dang “service mesh”? idk what real problems these things are trying to solve, I think they’re just inventing stuff for themselves to do load balancing and canaries. basically, threading a single end user request back through a load balancer 10 times because of microservices is obviously horrible. service mesh isn’t obviously horrible
|
# ? Feb 12, 2020 20:02 |
|
istio is for when kube doesn't have enough moving parts for your taste
|
# ? Feb 12, 2020 20:08 |
|
istio specifically is google trying to regain control over (some aspect of) k8s regardless of whether the functionality even makes sense at that layer. also the istio code is a mass of spaghetti. for example all external-dns 32 bit builds panic after running several minutes because they import some istio client library, which launches static background timer threads when the client module itself is imported (as opposed to when it's actually inited/used), and those threads eventually crash after several minutes due to unaligned atomics in some istio base library

Progressive JPEG fucked around with this message at 20:30 on Feb 12, 2020 |
# ? Feb 12, 2020 20:28 |
|
Nomnom Cookie posted:load balancing and canaries. basically, threading a single end user request back through a load balancer 10 times because of microservices is obviously horrible. service mesh isn’t obviously horrible well it definitely seems like more of the former! love too pass all dataplane traffic through userspace
|
# ? Feb 12, 2020 21:41 |
|
Progressive JPEG posted:istio specifically is google trying to regain control over (some aspect of) k8s regardless of whether the functionality even makes sense at that layer welp, thanks for turning me off to istio. agreed that service meshes are smarter than dealing with a mess of load balancers etc but dang. consul connect looks interesting but i no longer use many hashicorp products in infrastructure outside of terraform.
|
# ? Feb 12, 2020 21:58 |
|
istio is a stillsuit your app can wear that protects it from the dry and hostile environment of kubernetes
|
# ? Feb 12, 2020 22:17 |
|
Ploft-shell crab posted:any of y’all tenants trying to get you to run a god dang “service mesh”? idk what real problems these things are trying to solve, I think they’re just inventing stuff for themselves to do worse, i work for a company that PRODUCES a service mesh
|
# ? Feb 13, 2020 04:44 |
|
service meshes are for when you've given up on your developers giving a gently caress about monitoring, reliability or observability
|
# ? Mar 2, 2020 02:41 |
the talent deficit posted:service meshes are for when you've given up on your developers giving a gently caress about monitoring, reliability or observability sup
|
|
# ? Mar 2, 2020 03:26 |
|
the talent deficit posted:service meshes are for when you've given up on your developers giving a gently caress about monitoring, reliability or observability its something the platform team can roll out to improve poo poo across the board without messing with product teams. so like yes you're right but also this is how we want it i guess
|
# ? Mar 2, 2020 05:52 |
|
my wacky local ISP double-NATs me but will sell me a public IPv4 address for $5/mo. when I asked to just get an IPv6 allocation they told me that they aren't there yet, but I could save money and get a NordVPN account for $3/mo.
|
# ? Mar 2, 2020 08:19 |
|
Jimmy Carter posted:my wacky local ISP double-NATs me but will sell me a public IPv4 address for $5/mo. More like IPv6000 years to implement!! We had a full ipv6 dual stack deployment at a relatively large place and it legitimately didn't cause many issues and any they did were purely server/client implementation related. Why an ISP wouldn't already provide it I have nfi.
|
# ? Mar 2, 2020 08:34 |
|
the talent deficit posted:service meshes are for when you've given up on your developers giving a gently caress about monitoring, reliability or observability me when i hear customers ask "can't we just log the whole request body cause otherwise we won't be able to figure out what went wrong with our apps". if the only way you can figure out what went wrong in upstream applications is logging the full request body to try and reconstruct the problem, you have bigger problems than this will solve
|
# ? Mar 2, 2020 09:33 |
|
Nomnom Cookie posted:its something the platform team can roll out to improve poo poo across the board without messing with product teams. so like yes you're right but also this is how we want it i guess i mean i get it but it's like installing inflatable bumpers along roadsides because drivers keep driving off the road. it's a terrible solution to a terrible problem that has a much simpler solution (ban cars/microservices)
|
# ? Mar 2, 2020 19:33 |
|
i would simply intentionally design systems instead of cobble together whatever shits laying around or sounds interesting until something resembling a usable outcome occurs
|
# ? Mar 2, 2020 20:52 |
|
akadajet posted:We're on Azure lmao. my goondolensces
|
# ? Mar 2, 2020 21:33 |
|
the talent deficit posted:i mean i get it but it's like installing inflatable bumpers along roadsides because drivers keep driving off the road. it's a terrible solution to a terrible problem that has a much simpler solution (ban cars/microservices) as a result of breaking a bunch of services out from a big ol monolith, our half-dozen or so product teams can deploy independently, and more importantly can roll back independently. rolling out linkerd so we can do canaries between the services is treating a self-inflicted wound, but pulling everything back into a single process would be even worse. if you have a Third Way architecture that resolves all these issues, please do share it and I will be happy to present it as my own at work and collect the kudos for solving a pretty significant problem we're facing
|
# ? Mar 3, 2020 00:16 |
|
Bloody posted:i would simply intentionally design systems instead of cobble together whatever shits laying around or sounds interesting until something resembling a usable outcome occurs sir this is a wendies drive through
|
# ? Mar 3, 2020 04:37 |
|
i am going to destroy a cisco 4510 with a car battery tomorrow, im going to smash it to pieces with my coworker in a parking lot and none of you can stop me
|
# ? Mar 3, 2020 06:31 |
|
abigserve posted:More like IPv6000 years to implement!! I should mention my provider employs 8 people, and when I called and asked for Tech Support I got their lead network engineer's cellphone and they had zero problems with me re-doing the punchdowns on the patch panel in my unit. It's honestly refreshing when your ISP's customer service strategy is 'game recognize game'.
|
# ? Mar 3, 2020 08:15 |
|
Jbz posted:i am going to destroy a cisco 4510 with a car battery tomorrow, im going to smash it to pieces with my coworker in a parking lot and none of you can stop me good
|
# ? Mar 3, 2020 14:56 |
|
I did not destroy the switch, instead it was simply Fixed when I returned to work.
|
# ? Mar 3, 2020 23:45 |
|
google charging per gke cluster now https://cloud.google.com/kubernetes-engine/pricing

gillette boss: these razors are selling like hotcakes, we'd be idiots not to raise the price!
|
# ? Mar 4, 2020 22:00 |
|
well more like, idk, harrys
|
# ? Mar 4, 2020 22:06 |
|
hate to be a conspiracy dork but i really get the impression that google is trying to kill GCP
|
# ? Mar 4, 2020 22:28 |
|
suffix posted:google charging per gke cluster now https://cloud.google.com/kubernetes-engine/pricing lol what are you doing that this matters. $75/cluster/mo is basically nothing in any sane k8s deployment scenario
|
# ? Mar 4, 2020 23:13 |
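(for scale: the fee google announced in march 2020 was $0.10 per cluster per hour, with, if memory serves, one zonal cluster per billing account exempt. back-of-envelope, assuming that rate and GCP's 730-hour billing month:)

```go
package main

import "fmt"

func main() {
	// assumed rates: GKE management fee from the March 2020 announcement
	// ($0.10/cluster/hour) and GCP's 730-hour average billing month
	const feePerClusterHour = 0.10
	const hoursPerMonth = 730

	perCluster := feePerClusterHour * hoursPerMonth
	fmt.Printf("$%.0f/cluster/mo\n", perCluster)            // $73
	fmt.Printf("$%.0f/mo for 36 clusters\n", 36*perCluster) // $2628
}
```

so roughly the $75/cluster/mo quoted above, which is rounding error for prod but real money if your clusters are three tiny test workers each.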
|
psiox posted:hate to be a conspiracy dork but i really get the impression that google is trying to kill GCP well yeah after a few years of a google product existing, googlers stop finding ways to use it to get promoted
|
# ? Mar 4, 2020 23:18 |
|
Nomnom Cookie posted:lol we use it primarily to test and develop k8s tooling in a realistic environment, so it's just 3 workers and the extra cost is significant for that
|
# ? Mar 5, 2020 05:34 |
|
while the per-hour cost isn't that crazy, i thought that anybody using kubernetes is actually running N^2 kubernetes cluster instances to test that their poo poo won't break. every day it feels like it's just the new openstack
|
# ? Mar 5, 2020 05:42 |
|
psiox posted:while the per-hour cost isn't that crazy, i thought that anybody using kubernetes is actually running N^2 kubernetes cluster instances to test that their poo poo won't break we’re running 6 clusters, but the EKS fees on 36 clusters would still be less than 5% of our overall spend. still at the “this is not what bankrupts us” level

CMYK BLYAT! posted:we use it primarily to test and develop k8s tooling in a realistic environment, so it's just 3 workers and the extra cost is significant for that I’m not sure how you get from “large number of tiny clusters” to “our testing is happening in a realistic environment”. I assume you have a large number of clusters, anyway, because otherwise why gaf
|
# ? Mar 5, 2020 05:50 |
|
abigserve posted:More like IPv6000 years to implement!! a bunch of stuff didn't support SLAAC+DHCPv6 PD for ages or required new hardware also on the cisco 9k agg platform doing dual stack halves your qos queue capacity as each protocol uses a queue slot also old network engineers refusing to learn new things
|
# ? Mar 5, 2020 06:54 |
|
my stepdads beer posted:a bunch of stuff didn't support SLAAC+DHCPv6 PD for ages or required new hardware also implementing IPv6 provides zero new revenue so it's the lowest possible priority even if it is possible. now that the alternative is CGNAT it sort of has a business case but most ISPs still don't GAF.
|
# ? Mar 5, 2020 13:35 |
|
whats a good san I can put cheap consumer ssds in
|
# ? Apr 14, 2020 05:02 |
|
a garbage can
|
# ? Apr 14, 2020 05:34 |
|
my rear end
|
# ? Apr 14, 2020 08:34 |
|
fill a small nas with 'em. I assume for home use, you can get a mini-itx case with like 8 drive slots (at least, there's probably even bigger ones)
|
# ? Apr 14, 2020 12:33 |
|
idk why youd want to build a home nas with ssds when 5400 rpm spinners are perfectly cromulent for serving your plex media
|
# ? Apr 14, 2020 19:34 |
|
abigserve posted:fill a small nas with 'em. I assume for home use, you can get a mini-itx case with like 8 drive slots (at least, there's probably even bigger ones) just a thought exercise to see how cheap it could be vs a HPE MSA or whatever
|
# ? Apr 15, 2020 06:30 |
|
CMYK BLYAT! posted:we use it primarily to test and develop k8s tooling in a realistic environment, so it's just 3 workers and the extra cost is significant for that So per hour charging should be ideal? Test on some garbage VM then when hitting final round of QA spin up a prod instance and shutdown.
|
# ? Apr 19, 2020 00:04 |
|
abigserve posted:fill a small nas with 'em. I assume for home use, you can get a mini-itx case with like 8 drive slots (at least, there's probably even bigger ones) With SSDs you can just leave 'em banging around loose inside the case. The absolute cheapest option would be whatever PC you have + a Dell PERC H310 (or some other raid card that has or can be flashed to JBOD mode) + a rat's nest of SSDs
|
# ? Apr 19, 2020 01:07 |