Methanar
Sep 26, 2013

by the sex ghost

Docjowles posted:

You have just answered your own question

8pm vs 11 pm posting

amethystdragon
Sep 14, 2019

Methanar posted:

8pm vs 11 pm posting

Wisdom that comes with an additional 3 hours of age

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

amethystdragon posted:

Wisdom that comes with an additional 3 hours of age
The whole world changed on 9/11

joebuddah
Jan 30, 2005
What I'm trying to do is read USB/serial data from a Flexport device into a Node-RED/node.js server. The host system is Windows. I can see the data in PuTTY and RealTerm.

I am running node as root, and have added the device's ttyS0-S4. I have checked in bash that the node server has read and write access to the devices. I have installed the serial package for Node-RED. However I cannot get any data to flow into Node-RED. Do I need to manually open the serial port, or does node listen? If I need to open the COM port manually, how do I do that?

Hadlock
Nov 9, 2004

Vulture Culture posted:

The whole world changed on 9/11

You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference.

The ridiculousness of taking your shoes off going through security (which no other country requires, btw) was a separate event after the fact

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hadlock posted:

You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference.

The ridiculousness of taking your shoes off going through security (which no other country requires, btw) was a separate event after the fact
Actually,

Necronomicon
Jan 18, 2004

Hadlock posted:

You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference.

The ridiculousness of taking your shoes off going through security (which no other country requires, btw) was a separate event after the fact

when was the last time you were on a plane?

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

Hadlock posted:

You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference.

The ridiculousness of taking your shoes off going through security (which no other country requires, btw) was a separate event after the fact

Says the white person. Before 9/11 you could cross the US/Mexico border by simply saying the magical phrase "I'm a US Citizen." I did this multiple times as a teenager living illegally in the US to go visit family back in Mexico.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Janitor Prime posted:

Says the white person. Before 9/11 you could cross the US/Mexico border by simply saying the magical phrase "I'm a US Citizen" I did this multiple times as a teenager living illegally in the US to go visit family back in Mexico.

Wait, so what was the magic phrase on the way back?

PBS
Sep 21, 2015

Hadlock posted:

You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference.

The ridiculousness of taking your shoes off going through security (which no other country requires, btw) was a separate event after the fact

My regular 20-30 minute wait in lines begs to differ. Walking through a metal detector was fast, and they barely looked at luggage under an x-ray.

Every airport I've been in has significantly increased the amount of equipment they have and the number of lines in order to have a lower total bandwidth than pre 9/11.

I also didn't have to choose between a millimeter wave scanner and some rando molesting me previously.

Methanar
Sep 26, 2013

by the sex ghost
airports are cool because they concentrate everyone into large crowds before the bombs would be found at security.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Methanar posted:

airports are cool because they concentrate everyone into large crowds before the bombs would be found at security.
Baggage claim, also, as we learned in Ft. Lauderdale

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
on one hand security theater is intrusive and doesn’t help, on the other, a bunch of contractors and consultants and so forth got a lot of lucrative contracts out of the deal so who can say if it’s bad or not

I’ll leave it to the reader to discover the applicability of this metaphor to working in software

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

NihilCredo posted:

Wait, so what was the magic phrase on the way back?

Going into Mexico you didn't say poo poo; coming back, all you had to say was "I'm a US citizen" without a thick accent.

bergeoisie
Aug 29, 2004
How do folks handle load spreading across their k8s apiservers? We have an issue where the load can become concentrated on a single apiserver; that apiserver starts throwing 429s and then the clients just... keep doing their normal thing. There's an unscheduled ticket on it in the k8s repo, but it's bad enough in practice that I feel like other people must have worked around this by now.

Methanar
Sep 26, 2013

by the sex ghost

bergeoisie posted:

How do folks handle load spreading across their k8s apiservers? We have an issue where the load can become concentrated on a single apiserver; that apiserver starts throwing 429s and then the clients just... keep doing their normal thing. There's an unscheduled ticket on it in the k8s repo, but it's bad enough in practice that I feel like other people must have worked around this by now.

Put a load balancer in front

bergeoisie
Aug 29, 2004

Methanar posted:

Put a load balancer in front

We have one, but it's not sufficient because the k8s clients try really really hard to never renegotiate a connection unless they absolutely have to. We tend to see this behavior when doing rolling restarts of the apiservers. Load can get shifted quickly and one of the apiservers can get unlucky.

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki
Simply disable all keepalive connections whatsoever, the tried and true solution to all HTTP-related problems.

crazypenguin
Mar 9, 2005
nothing witty here, move along
Don’t http load balancers distribute requests from a single connection across servers? It seems like something they would do...

JHVH-1
Jun 28, 2002

crazypenguin posted:

Don’t http load balancers distribute requests from a single connection across servers? It seems like something they would do...

If it connects and is kept alive it's not going to go anywhere, because it's still the same connection.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Most load balancers understand state and things get dicey when a node goes down taking a connection with it that wasn’t replicable or recoverable properly. Haven’t seen a system that distributes a single TCP flow / connection concurrently over multiple physical hosts without some serious TCP black magic but I presume it exists and I’m just ignorant.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

necrobobsledder posted:

Most load balancers understand state and things get dicey when a node goes down taking a connection with it that wasn’t replicable or recoverable properly. Haven’t seen a system that distributes a single TCP flow / connection concurrently over multiple physical hosts without some serious TCP black magic but I presume it exists and I’m just ignorant.

I don't think so... I mean, I'm sure somebody somewhere did it and it's an awful mess, but based on the protocol itself I don't think that would function very well. You can't just tell another computer "hey, continue this TCP stream at this SRC/DST port, rewrite the IP section of the header for destination, and off you go." There's just so much setup with TCP that I can't see this ever being functional.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

bergeoisie posted:

How do folks handle load spreading across their k8s apiservers? We have an issue where the load can become concentrated on a single apiserver; that apiserver starts throwing 429s and then the clients just... keep doing their normal thing. There's an unscheduled ticket on it in the k8s repo, but it's bad enough in practice that I feel like other people must have worked around this by now.

Is your primary source of traffic from the control plane (kubelet, scheduler, controller manager) or from external applications connecting to the API servers?

Docjowles
Apr 9, 2009

You can definitely do this with a layer 7 load balancer. It was a configurable option on the piece of poo poo A10s we recently got rid of. The client makes its persistent TCP connection with the LB. The LB maintains its own set of TCP connections with each backend server. It inspects the client headers and each individual HTTP request in that session is round-robined across the servers. There isn’t really any fuckery involved at the TCP/IP layer.

FamDav
Mar 29, 2008

Docjowles posted:

You can definitely do this with a layer 7 load balancer. It was a configurable option on the piece of poo poo A10s we recently got rid of. The client makes its persistent TCP connection with the LB. The LB maintains its own set of TCP connections with each backend server. It inspects the client headers and each individual HTTP request in that session is round-robined across the servers. There isn’t really any fuckery involved at the TCP/IP layer.

This is how ALBs work as well. You can optionally choose sticky sessions to explicitly avoid redistribution.

For l4, there are plenty of ways to have the load balancer keep highly available flows by distributing flow decisions across hosts, but you’d be hard pressed to redistribute a tcp flow across back ends without it being shadow traffic.

Pile Of Garbage
May 28, 2007



A lot of dedicated LB appliances like Citrix NetScalers support monitors that can be associated with backend services to affect LB behaviour. These range from a simple HTTP GET against a backend service's health-check endpoint to see if it's up, to more advanced custom load/user monitors.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

Docjowles posted:

You can definitely do this with a layer 7 load balancer. It was a configurable option on the piece of poo poo A10s we recently got rid of. The client makes its persistent TCP connection with the LB. The LB maintains its own set of TCP connections with each backend server. It inspects the client headers and each individual HTTP request in that session is round-robined across the servers. There isn’t really any fuckery involved at the TCP/IP layer.

The original suggestion:

necro posted:

Haven’t seen a system that distributes a single TCP flow / connection concurrently over multiple physical hosts without some serious TCP black magic but I presume it exists and I’m just ignorant.

Not talking about breaking up an HTTP session, talking about breaking up a single TCP session. One is feasible (and done often), the other is not!

Methanar
Sep 26, 2013

by the sex ghost
The community prometheus operator helm chart is an abomination. I shouldn't have to deal with all of this special non-standard app-specific generation magic bullshit to do simple things. I actually still don't know how I would properly figure out how to do this aside from scraping together partial answers posted to github issues.

I'm at least 3 layers of indirection deep with using helmfiles to template value files for rendering the actual chart templates which are consumed as a CRD as a config for something entirely not Kubernetes which will generate deployment definitions for me in ways I don't understand.

Mao Zedong Thot
Oct 16, 2008


Methanar posted:

The community prometheus operator helm chart is an abomination. I shouldn't have to deal with all of this special non-standard app-specific generation magic bullshit to do simple things. I actually still don't know how I would properly figure out how to do this aside from scraping together partial answers posted to github issues.

I'm at least 3 layers of indirection deep with using helmfiles to template value files for rendering the actual chart templates which are consumed as a CRD as a config for something entirely not Kubernetes which will generate deployment definitions for me in ways I don't understand.

Helm is a pretty big mistake overall, tbh

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
flux good tho?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Methanar posted:

The community prometheus operator helm chart is an abomination. I shouldn't have to deal with all of this special non-standard app-specific generation magic bullshit to do simple things. I actually still don't know how I would properly figure out how to do this aside from scraping together partial answers posted to github issues.

I'm at least 3 layers of indirection deep with using helmfiles to template value files for rendering the actual chart templates which are consumed as a CRD as a config for something entirely not Kubernetes which will generate deployment definitions for me in ways I don't understand.
That's sort of the deal with Kubernetes operators in general though, Helm just adds another layer of "cool, now there's templates on top of this dumpster fire"

Methanar
Sep 26, 2013

by the sex ghost

Vulture Culture posted:

That's sort of the deal with Kubernetes operators in general though, Helm just adds another layer of "cool, now there's templates on top of this dumpster fire"

It's really cool that if you OOM Prometheus it actually hard-spinlocks itself to the point you can't ssh into the node. Had that happen like, 5 times now.

Also CRDs are fantastic because they aren't deleted on chart delete, but you will fail to re-apply the chart from scratch because the CRDs already exist. If you delete the CRDs, serviceMonitor and podMonitor API objects are suddenly undefined and will be deleted along with their CRD.

So now on top of creating prometheus, you need to go and reapply all of the serviceMonitor definitions, which means running the other helm charts that create them.


Helm was a mistake. Operators were a mistake. Kubernetes was a mistake.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Methanar posted:

Kubernetes was a mistake.
Mark Burgess has some interesting ideas about how top-down orchestration is a fundamentally broken idea and the only declarative model that can actually work is strong promise-driven actors.

Of course, they're buried in a 500 page draft:
http://markburgess.org/treatise_vol2.pdf

Methanar posted:

Operators were a mistake.
Unquestionably. They're the Oracle ClusterWare of the container wave.

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Whereas I think cfengine (of which he was the author), chef, and promise-theory application onto individual servers in general are fundamentally flawed and outdated ways of managing systems. But I'm not going to write a 500-page treatise about it

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki
Kubernetes good. Kubernetes users bad. Organizations bad.

You, a software vendor, release an application, and the application runs reasonably well within Kubernetes despite needing to work around some pre-Kubernetes design decisions. If you are familiar with the building blocks used to create Kubernetes deployments, you can likely figure these out.

Your users, however, don't know Kubernetes, and don't have the slightest idea how to start writing a manifest. They fly into a panic the instant anything goes wrong, and are paralyzed with fear of the new and confusing Kubernetes landscape. Checking application logs or making the most basic effort to interpret error messages (what does "could not find Secret foobar" mean? AN IMPOSSIBLE MYSTERY) is an insurmountable, herculean task--although they're what sysadmindevopswordsalad people have done for eons, those same tasks are now treated as inscrutable dark magic once kubectl is involved. Unfortunately, your users' management cares little about whether their reports are able to effectively manage applications in Kubernetes and has no interest in providing them with training: a big important CTO type has issued a mandate to move onto Kubernetes, and by god, management is going to demonstrate its ability to deliver on that mandate through traditional poo poo management practice of "yell at your employees to do poo poo faster". Challenges and poor planning be damned, the deadline WILL be met.

Enter Helm. Helm is a fantastic, if flawed tool for expediting many things an experienced Kubernetes admin learns they need to do when managing deployments manually. Know that you'll change basically three values and leave the rest of the manifest the same? Great! You can stick those in a brief file and have them inserted at the correct location! Need to run something before starting an upgrade rollout? We got lifecycle hooks! Need to create the same three mostly identical pieces of configuration for each configmap you add? Template loops!

Sadly, the aforementioned users treat Helm as an easy button. Someone already wrote most of the manifest and provided a concise set of things they probably will need to change, so it's no longer necessary, from their point of view, to understand what the rest of the manifest is doing, why it's doing it, or how to fix it when it goes wrong. Hardly any will try to modify the templates to add features their particular environment needs; everything is submitted back to the vendor to implement. Users treat Helm as another layer of abstraction, where it's actually augmenting an existing abstraction layer and doesn't excuse you from understanding that layer. Those augmentations add further complexity alongside the existing abstraction layer, so you need to know more about what you're doing to use them effectively, not less.
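For anyone who hasn't touched Helm, the "change basically three values" and "template loop" points look roughly like the fragment below. Chart structure, value names, and the ConfigMap loop are all invented for illustration, not taken from any real chart:

```yaml
# values.yaml -- the short file users actually edit
replicaCount: 3
image:
  tag: "1.4.2"
configFiles:
  - app.conf
  - logging.conf

# templates/configmaps.yaml -- a template loop stamping out one
# ConfigMap per list entry, so adding a file means adding one line
# to values.yaml rather than another full manifest
{{- range .Values.configFiles }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ . | replace "." "-" }}
---
{{- end }}
```

The trap described above is exactly this split: the values file looks like the whole interface, but the templates behind it are still full Kubernetes manifests that someone has to understand when they misbehave.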

Methanar
Sep 26, 2013

by the sex ghost

CMYK BLYAT! posted:

Kubernetes good. Kubernetes users bad. Organizations bad.

You, a software vendor, release an application, and the application runs reasonably well within Kubernetes despite needing to work around some pre-Kubernetes design decisions. If you are familiar with the building blocks used to create Kubernetes deployments, you can likely figure these out.

Your users, however, don't know Kubernetes, and don't have the slightest idea how to start writing a manifest. They fly into a panic the instant anything goes wrong, and are paralyzed with fear of the new and confusing Kubernetes landscape. Checking application logs or making the most basic effort to interpret error messages (what does "could not find Secret foobar" mean? AN IMPOSSIBLE MYSTERY) is an insurmountable, herculean task--although they're what sysadmindevopswordsalad people have done for eons, those same tasks are now treated as inscrutable dark magic once kubectl is involved. Unfortunately, your users' management cares little about whether their reports are able to effectively manage applications in Kubernetes and has no interest in providing them with training: a big important CTO type has issued a mandate to move onto Kubernetes, and by god, management is going to demonstrate its ability to deliver on that mandate through traditional poo poo management practice of "yell at your employees to do poo poo faster". Challenges and poor planning be damned, the deadline WILL be met.

Enter Helm. Helm is a fantastic, if flawed tool for expediting many things an experienced Kubernetes admin learns they need to do when managing deployments manually. Know that you'll change basically three values and leave the rest of the manifest the same? Great! You can stick those in a brief file and have them inserted at the correct location! Need to run something before starting an upgrade rollout? We got lifecycle hooks! Need to create the same three mostly identical pieces of configuration for each configmap you add? Template loops!

Sadly, the aforementioned users treat Helm as an easy button. Someone already wrote most of the manifest and provided a concise set of things they probably will need to change, so it's no longer necessary, from their point of view, to understand what the rest of the manifest is doing, why it's doing it, or how to fix it when it goes wrong. Hardly any will try to modify the templates to add features their particular environment needs; everything is submitted back to the vendor to implement. Users treat Helm as another layer of abstraction, where it's actually augmenting an existing abstraction layer and doesn't excuse you from understanding that layer. Those augmentations add further complexity alongside the existing abstraction layer, so you need to know more about what you're doing to use them effectively, not less.

actually kubernetes is bad.

And the last thing I will ever want is for users to be writing helm beyond copy-pasting their app-specific values into the standard pre-formatted helm values file.

Every dev used to manage and write their own chef cookbooks to do that and that was really, really, really bad.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

CMYK BLYAT! posted:

Kubernetes good. Kubernetes users bad. Organizations bad.

You, a software vendor, release an application, and the application runs reasonably well within Kubernetes despite needing to work around some pre-Kubernetes design decisions. If you are familiar with the building blocks used to create Kubernetes deployments, you can likely figure these out.

Your users, however, don't know Kubernetes, and don't have the slightest idea how to start writing a manifest. They fly into a panic the instant anything goes wrong, and are paralyzed with fear of the new and confusing Kubernetes landscape. Checking application logs or making the most basic effort to interpret error messages (what does "could not find Secret foobar" mean? AN IMPOSSIBLE MYSTERY) is an insurmountable, herculean task--although they're what sysadmindevopswordsalad people have done for eons, those same tasks are now treated as inscrutable dark magic once kubectl is involved. Unfortunately, your users' management cares little about whether their reports are able to effectively manage applications in Kubernetes and has no interest in providing them with training: a big important CTO type has issued a mandate to move onto Kubernetes, and by god, management is going to demonstrate its ability to deliver on that mandate through traditional poo poo management practice of "yell at your employees to do poo poo faster". Challenges and poor planning be damned, the deadline WILL be met.

Enter Helm. Helm is a fantastic, if flawed tool for expediting many things an experienced Kubernetes admin learns they need to do when managing deployments manually. Know that you'll change basically three values and leave the rest of the manifest the same? Great! You can stick those in a brief file and have them inserted at the correct location! Need to run something before starting an upgrade rollout? We got lifecycle hooks! Need to create the same three mostly identical pieces of configuration for each configmap you add? Template loops!

Sadly, the aforementioned users treat Helm as an easy button. Someone already wrote most of the manifest and provided a concise set of things they probably will need to change, so it's no longer necessary, from their point of view, to understand what the rest of the manifest is doing, why it's doing it, or how to fix it when it goes wrong. Hardly any will try to modify the templates to add features their particular environment needs; everything is submitted back to the vendor to implement. Users treat Helm as another layer of abstraction, where it's actually augmenting an existing abstraction layer and doesn't excuse you from understanding that layer. Those augmentations add further complexity alongside the existing abstraction layer, so you need to know more about what you're doing to use them effectively, not less.

If you as an operations engineer can't provide easy to use tools for product engineers to release and run their code then you've failed in your job.

Kubernetes is a layer of abstraction that makes deployment and operations by users more complex than it needs to be and provides so many moving parts that touching it in the wrong way can cause cascading issues in places a user couldn't possibly imagine.

Golden image worked perfectly and was extremely simple, easy to reason about failure cases, and scaled nicely. I have no idea why people needed to completely reinvent the wheel.

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki

Blinkz0rz posted:

If you as an operations engineer can't provide easy to use tools for product engineers to release and run their code then you've failed in your job.

These /are/ operations engineers, just operations engineers that can't be arsed to learn anything new if it requires more than three steps that require no decisions. Those aren't all of our users--some are quite capable of tuning things on their own or adding onto a stock template--but they're a nonzero fraction of our customers and good at throwing "we're paying you $BIGX so just do it the way we want" bricks through windows. Management on our end typically acquiesces because they haven't figured out whether we're a professional services bespoke software consultancy or not, and reliably choose "quick thing that makes one customer stop complaining now" versus "actually think about the problem and how a solution that works for one customer might box us into a corner down the road".

I'd love for things to be simpler, I really would. Sadly, we don't have infinite time and resources to try and abstract away every possible decision that's necessary when deploying complex, rapidly-evolving software into every conceivable (and often badly-designed) network architecture.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

CMYK BLYAT! posted:

I'd love for things to be simpler, I really would. Sadly, we don't have infinite time and resources to try and abstract away every possible decision that's necessary when deploying complex, rapidly-evolving software into every conceivable (and often badly-designed) network architecture.

This is sort of the crux of my argument against Kubernetes. 99% of applications deployed in k8s don't require the complexity that k8s brings. It's resume-driven development for ops teams and it loving sucks to be on the product eng side when dealing with it.

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki
Some things -do- require that level of complexity and tuning though. Our products, for lack of a more detailed description, work at a lower layer than a lot of applications deployed to Kubernetes typically do, and when deployed effectively, can help smooth complexity for other applications in the cluster. The cost, however, is that users need to understand how and why they're doing that smoothing, and how the same might apply to the thing they're smoothing with.

Industry actors want an airplane that can be operated by an idiot babby to get from A to B, but have parts bins full of various engines, chassis, and fuel. They have observed that other actors have built airplanes to great success, and want to build their own, and deliver a very Picardian "make it so" directive to their subordinates, who have never built an airplane before but somewhat understand how the components work from their previous experience working on trains. Somehow, however, they jettison much of that knowledge and return to basic experimentation: they try to make their fully-electric plane fly by filling it with diesel fuel, or install an engine designed to power a golf cart into an airframe. This fails, and everyone roundly concludes that airplanes are impossible and that horse-drawn carriages are the only viable transportation option.
