|
Docjowles posted:You have just answered your own question 8pm vs 11 pm posting
|
# ? Nov 21, 2019 09:25 |
|
Methanar posted:8pm vs 11 pm posting Wisdom that comes with an additional 3 hours of age
|
# ? Nov 23, 2019 07:37 |
|
amethystdragon posted:Wisdom that comes with an additional 3 hours of age
|
# ? Nov 23, 2019 17:09 |
|
What I'm trying to do is read USB/serial data from a flexport device through a Node-RED/node.js server. The host system is Windows. I can see the data in PuTTY and RealTerm. I am running node as root, and have added the device's ttyS0-S4. I have checked in bash that the node server has read and write access to the devices. I have installed the serial package for Node-RED. However, I cannot get any data to go into Node-RED. Do I need to manually open the serial port, or does node listen? If I need to open the COM port manually, how do I do that?
|
# ? Nov 23, 2019 17:42 |
|
Vulture Culture posted:The whole world changed on 9/11 You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference. The ridiculousness of taking your shoes off going through security (which no other country requires, btw) was a separate event after the fact.
|
# ? Nov 23, 2019 21:59 |
|
Hadlock posted:You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference.
|
# ? Nov 23, 2019 23:00 |
|
Hadlock posted:You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference. when was the last time you were on a plane?
|
# ? Nov 25, 2019 21:41 |
|
Hadlock posted:You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference. Says the white person. Before 9/11 you could cross the US/Mexico border by simply saying the magical phrase "I'm a US Citizen". I did this multiple times as a teenager living illegally in the US to go visit family back in Mexico.
|
# ? Nov 26, 2019 00:43 |
|
Janitor Prime posted:Says the white person. Before 9/11 you could cross the US/Mexico border by simply saying the magical phrase "I'm a US Citizen" I did this multiple times as a teenager living illegally in the US to go visit family back in Mexico. Wait, so what was the magic phrase on the way back?
|
# ? Nov 26, 2019 01:48 |
|
Hadlock posted:You can no longer meet your family at the gate when their plane lands at your city. That's about the only recognizable difference. My regular 20-30 minute waits in line beg to differ. Walking through a metal detector was fast, and they barely looked at luggage under an x-ray. Every airport I've been in has significantly increased the amount of equipment and the number of lines, only to have lower total throughput than pre-9/11. I also didn't previously have to choose between a millimeter-wave scanner and some rando molesting me.
|
# ? Nov 26, 2019 05:05 |
|
airports are cool because they concentrate everyone into large crowds before the bombs would be found at security.
|
# ? Nov 26, 2019 06:22 |
|
Methanar posted:airports are cool because they concentrate everyone into large crowds before the bombs would be found at security.
|
# ? Nov 27, 2019 15:29 |
|
on one hand security theater is intrusive and doesn’t help; on the other, a bunch of contractors and consultants and so forth got a lot of lucrative contracts out of the deal, so who can say if it’s bad or not. I’ll leave it to the reader to discover the applicability of this metaphor to working in software
|
# ? Nov 27, 2019 15:32 |
|
NihilCredo posted:Wait, so what was the magic phrase on the way back? Going into Mexico you didn't say poo poo; coming back, all you had to say was "I'm a US citizen" without a thick accent.
|
# ? Nov 27, 2019 17:04 |
|
How do folks handle load spreading across their k8s apiservers? We have an issue where load can become concentrated on a single apiserver; that apiserver starts throwing 429s, and then the clients just...keep doing their normal thing. There's an unscheduled ticket on it in the k8s repo, but it's bad enough in practice that I feel like other people must have worked around this by now.
|
# ? Dec 4, 2019 07:10 |
|
bergeoisie posted:How do folks handle load spreading across their k8s apiservers? We have an issue where load can become concentrated on a single apiserver; that apiserver starts throwing 429s, and then the clients just...keep doing their normal thing. There's an unscheduled ticket on it in the k8s repo, but it's bad enough in practice that I feel like other people must have worked around this by now. Put a load balancer in front.
|
# ? Dec 4, 2019 07:32 |
|
Methanar posted:Put a load balancer in front We have one, but it's not sufficient because the k8s clients try really really hard to never renegotiate a connection unless they absolutely have to. We tend to see this behavior when doing rolling restarts of the apiservers. Load can get shifted quickly and one of the apiservers can get unlucky.
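The rolling-restart skew described here is easy to see in a toy simulation (pure illustration, not Kubernetes code — the numbers and server names are made up): clients hold their connection until their server goes away, then pile onto whatever is still up, so the counts end up badly skewed instead of even.

```python
from collections import Counter

def rolling_restart(servers, clients_per_server):
    """Simulate clients with long-lived connections during a rolling restart.

    Clients never reconnect unless their server goes away; when it does,
    they rebalance evenly across the servers that are currently up.
    """
    conns = Counter({s: clients_per_server for s in servers})
    for restarting in servers:
        displaced = conns.pop(restarting)
        survivors = list(conns)
        for i in range(displaced):           # displaced clients spread over survivors
            conns[survivors[i % len(survivors)]] += 1
        conns[restarting] = 0                # server comes back up but gets no traffic
    return conns

result = rolling_restart(["a", "b", "c"], 100)
print(result)  # Counter({'a': 188, 'b': 112, 'c': 0}) -- badly skewed
```

Starting from a perfectly even 100/100/100 split, one full rolling restart leaves one server with nearly double its fair share and the last-restarted one completely idle — which matches the "one apiserver gets unlucky" behavior above.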
|
# ? Dec 4, 2019 07:55 |
|
Simply disable all keepalive connections whatsoever, the tried and true solution to all HTTP-related problems.
|
# ? Dec 4, 2019 08:48 |
|
Don’t http load balancers distribute requests from a single connection across servers? It seems like something they would do...
|
# ? Dec 4, 2019 16:15 |
|
crazypenguin posted:Don’t http load balancers distribute requests from a single connection across servers? It seems like something they would do... If it connects and is kept alive, it's not going to go anywhere, because it's still the same connection.
|
# ? Dec 4, 2019 16:49 |
|
Most load balancers understand state and things get dicey when a node goes down taking a connection with it that wasn’t replicable or recoverable properly. Haven’t seen a system that distributes a single TCP flow / connection concurrently over multiple physical hosts without some serious TCP black magic but I presume it exists and I’m just ignorant.
|
# ? Dec 4, 2019 19:08 |
|
necrobobsledder posted:Most load balancers understand state and things get dicey when a node goes down taking a connection with it that wasn’t replicable or recoverable properly. Haven’t seen a system that distributes a single TCP flow / connection concurrently over multiple physical hosts without some serious TCP black magic but I presume it exists and I’m just ignorant. I don't think so... I mean, I'm sure somebody somewhere did it and it's an awful mess, but based on the protocol itself, I don't think that would function very well. You can't just tell another computer, "hey, continue this TCP stream at this SRC/DST port, rewrite the IP section of the header for the destination, and off you go." There's just so much setup with TCP that I can't logically see this as functional... ever.
|
# ? Dec 4, 2019 22:55 |
|
bergeoisie posted:How do folks handle load spreading across their k8s apiservers? We have an issue where load can become concentrated on a single apiserver; that apiserver starts throwing 429s, and then the clients just...keep doing their normal thing. There's an unscheduled ticket on it in the k8s repo, but it's bad enough in practice that I feel like other people must have worked around this by now. Is your primary source of traffic the control plane (kubelet, scheduler, controller manager), or external applications connecting to the API servers?
|
# ? Dec 4, 2019 23:39 |
|
You can definitely do this with a layer 7 load balancer. It was a configurable option on the piece of poo poo A10s we recently got rid of. The client makes its persistent TCP connection with the LB. The LB maintains its own set of TCP connections with each backend server. It inspects the client headers and each individual HTTP request in that session is round-robined across the servers. There isn’t really any fuckery involved at the TCP/IP layer.
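The per-request (rather than per-connection) dispatch described here can be sketched in a few lines — a toy model, with made-up server names, not a real proxy:

```python
import itertools

class L7Balancer:
    """Toy layer-7 balancer: the client keeps one persistent connection to
    the LB, but each individual HTTP request is round-robined across the
    backend pool."""

    def __init__(self, backends):
        self._pool = itertools.cycle(backends)

    def handle_request(self, request):
        backend = next(self._pool)   # per-request decision, not per-connection
        return backend, request

lb = L7Balancer(["srv1", "srv2", "srv3"])

# six requests arriving over the same keepalive client connection:
hits = [lb.handle_request(f"GET /{i}")[0] for i in range(6)]
print(hits)  # ['srv1', 'srv2', 'srv3', 'srv1', 'srv2', 'srv3']
```

The key design point is that the balancing decision hangs off the HTTP request boundary, which the LB can see because it terminates the client's TCP connection itself; nothing at the TCP/IP layer is being split.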
|
# ? Dec 7, 2019 13:50 |
|
Docjowles posted:You can definitely do this with a layer 7 load balancer. It was a configurable option on the piece of poo poo A10s we recently got rid of. The client makes its persistent TCP connection with the LB. The LB maintains its own set of TCP connections with each backend server. It inspects the client headers and each individual HTTP request in that session is round-robined across the servers. There isn’t really any fuckery involved at the TCP/IP layer. This is how ALBs work as well. You can optionally choose sticky sessions to explicitly avoid redistribution. For L4, there are plenty of ways to have the load balancer keep flows highly available by distributing flow decisions across hosts, but you’d be hard-pressed to redistribute a TCP flow across backends without it being shadow traffic.
|
# ? Dec 8, 2019 05:23 |
|
A lot of dedicated LB appliances like Citrix NetScalers support monitors that can be associated with backend services to affect LB behaviour. These can range from a simple HTTP GET against a backend service's health check probe endpoint (to see if it's up) to more advanced custom load/user monitors.
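The simple end of that range — an HTTP GET monitor that keeps a backend in rotation only while its health endpoint answers 2xx — looks roughly like this stdlib-only sketch (not NetScaler config; endpoint path and semantics are illustrative assumptions):

```python
import http.server
import threading
import urllib.error
import urllib.request

def probe(url, timeout=2.0):
    """HTTP GET monitor: a backend is 'up' iff its health endpoint
    answers with a 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

class Health(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if self.path == "/healthz" else 404)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# stand up a throwaway backend on an ephemeral port
server = http.server.HTTPServer(("127.0.0.1", 0), Health)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

healthy = probe(f"http://127.0.0.1:{port}/healthz")   # True  -> keep in rotation
broken = probe(f"http://127.0.0.1:{port}/missing")    # False -> pull from rotation
server.shutdown()
print(healthy, broken)
```

A real appliance layers policy on top of this (consecutive-failure thresholds, probe intervals, slow-start on recovery), but the up/down decision is the same GET-and-check.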
|
# ? Dec 8, 2019 09:56 |
|
Docjowles posted:You can definitely do this with a layer 7 load balancer. It was a configurable option on the piece of poo poo A10s we recently got rid of. The client makes its persistent TCP connection with the LB. The LB maintains its own set of TCP connections with each backend server. It inspects the client headers and each individual HTTP request in that session is round-robined across the servers. There isn’t really any fuckery involved at the TCP/IP layer. The original suggestion: necro posted:Haven’t seen a system that distributes a single TCP flow / connection concurrently over multiple physical hosts without some serious TCP black magic but I presume it exists and I’m just ignorant. Not talking about breaking up an HTTP session; talking about breaking up a single TCP session. One is feasible (and done often); the other is not!
|
# ? Dec 9, 2019 08:50 |
|
The community prometheus operator helm chart is an abomination. I shouldn't have to deal with all of this special, non-standard, app-specific generation magic bullshit to do simple things. I actually still don't know how I would properly figure out how to do this aside from scraping together partial answers posted to github issues. I'm at least 3 layers of indirection deep, using helmfiles to template value files for rendering the actual chart templates, which are consumed as CRDs configuring something that isn't Kubernetes at all, which then generates deployment definitions for me in ways I don't understand.
|
# ? Dec 11, 2019 00:57 |
|
Methanar posted:The community prometheus operator helm chart is an abomination. I shouldn't have to deal with all of this special non-standard app-specific generation magic bullshit to do simple things. I actually still don't know how I would properly figure out how to do this aside from scraping together partial answers posted to github issues. Helm is a pretty big mistake overall, tbh
|
# ? Dec 11, 2019 05:34 |
|
flux good tho?
|
# ? Dec 11, 2019 06:31 |
|
Methanar posted:The community prometheus operator helm chart is an abomination. I shouldn't have to deal with all of this special non-standard app-specific generation magic bullshit to do simple things. I actually still don't know how I would properly figure out how to do this aside from scraping together partial answers posted to github issues.
|
# ? Dec 14, 2019 07:27 |
|
Vulture Culture posted:That's sort of the deal with Kubernetes operators in general though, Helm just adds another layer of "cool, now there's templates on top of this dumpster fire" It's really cool that if you OOM Prometheus, it actually hard-spinlocks itself to the point you can't ssh into the node. Had that happen like, 5 times now. Also, CRDs are fantastic because they aren't deleted on chart delete, but you will fail to re-apply the chart from scratch because the CRDs already exist. If you delete the CRDs, serviceMonitor and podMonitor API objects are suddenly undefined and will be deleted along with their CRD. So now, on top of recreating prometheus, you need to go ahead and reapply all of the serviceMonitor definitions, which means running the other helm charts that make those. Helm was a mistake. Operators were a mistake. Kubernetes was a mistake.
|
# ? Dec 14, 2019 07:56 |
|
Methanar posted:Kubernetes was a mistake. Of course, they're buried in a 500 page draft: http://markburgess.org/treatise_vol2.pdf Methanar posted:Operators were a mistake.
|
# ? Dec 14, 2019 14:34 |
|
Whereas I think of cfengine (of which he was the author), and chef, and promise-theory application onto individual servers in general, as fundamentally flawed and outdated ways of managing systems. But I'm not going to write a 500-page treatise about it
|
# ? Dec 14, 2019 19:20 |
|
Kubernetes good. Kubernetes users bad. Organizations bad.

You, a software vendor, release an application, and the application runs reasonably well within Kubernetes despite needing to work around some pre-Kubernetes design decisions. If you are familiar with the building blocks used to create Kubernetes deployments, you can likely figure these out. Your users, however, don't know Kubernetes, and don't have the slightest idea how to start writing a manifest. They fly into a panic the instant anything goes wrong, and are paralyzed with fear of the new and confusing Kubernetes landscape. Checking application logs or making the most basic effort to interpret error messages (what does "could not find Secret foobar" mean? AN IMPOSSIBLE MYSTERY) is an insurmountable, herculean task--although these are what sysadmindevopswordsalad people have done for eons, those same tasks are now treated as inscrutable dark magic once kubectl is involved.

Unfortunately, your users' management cares little about whether their reports are able to effectively manage applications in Kubernetes and has no interest in providing them with training: a big important CTO type has issued a mandate to move onto Kubernetes, and by god, management is going to demonstrate its ability to deliver on that mandate through the traditional poo poo management practice of "yell at your employees to do poo poo faster". Challenges and poor planning be damned, the deadline WILL be met.

Enter Helm. Helm is a fantastic, if flawed, tool for expediting many things an experienced Kubernetes admin learns they need to do when managing deployments manually. Know that you'll change basically three values and leave the rest of the manifest the same? Great! You can stick those in a brief file and have them inserted at the correct location! Need to run something before starting an upgrade rollout? We got lifecycle hooks! Need to create the same three mostly identical pieces of configuration for each configmap you add? Template loops!

Sadly, the aforementioned users treat Helm as an easy button. Someone already wrote most of the manifest and provided a concise set of things they probably will need to change, so it's no longer necessary, from their point of view, to understand what the rest of the manifest is doing, why it's doing it, or how to fix it when it goes wrong. Hardly any will try to modify the templates to add features their particular environment needs; everything is submitted back to the vendor to implement. Users treat Helm as another layer of abstraction, when it's actually augmenting an existing abstraction layer and doesn't excuse you from understanding that layer. Those augmentations add further complexity alongside the existing abstraction layer, so you need to know more about what you're doing to use them effectively, not less.
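For the curious, the "brief values file" and "template loops" conveniences look roughly like this (a hypothetical chart — the value names and ConfigMap layout are made up for illustration):

```yaml
# values.yaml -- the handful of values a user actually changes
replicaCount: 3
configFiles:
  - app.properties
  - logging.conf

# templates/configmaps.yaml -- one ConfigMap per listed file, via a template loop
{{- range .Values.configFiles }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $.Release.Name }}-{{ . | replace "." "-" }}
data:
  file: {{ . | quote }}
{{- end }}
```

Rendering the chart (e.g. with `helm template`) produces one ConfigMap per entry in `configFiles`, so a user who only overrides the values file never has to touch the manifest itself — which is exactly the convenience, and exactly the trap.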
|
# ? Dec 14, 2019 19:56 |
|
CMYK BLYAT! posted:Kubernetes good. Kubernetes users bad. Organizations bad. actually kubernetes is bad. And the last thing I will ever want is for users to be writing helm beyond copy-pasting their app-specific values into the standard pre-formatted helm values file. Every dev used to manage and write their own chef cookbooks to do that, and that was really, really, really bad.
|
# ? Dec 14, 2019 20:02 |
|
CMYK BLYAT! posted:Kubernetes good. Kubernetes users bad. Organizations bad. If you as an operations engineer can't provide easy-to-use tools for product engineers to release and run their code, then you've failed in your job. Kubernetes is a layer of abstraction that makes deployment and operations more complex than they need to be, and it provides so many moving parts that touching it in the wrong way can cause cascading issues in places a user couldn't possibly imagine. The golden image approach worked perfectly: it was extremely simple, easy to reason about in failure cases, and scaled nicely. I have no idea why people needed to completely reinvent the wheel.
|
# ? Dec 14, 2019 20:19 |
|
Blinkz0rz posted:If you as an operations engineer can't provide easy to use tools for product engineers to release and run their code then you've failed in your job. These /are/ operations engineers, just operations engineers that can't be arsed to learn anything new if it requires more than three steps or any decisions. Those aren't all of our users--some are quite capable of tuning things on their own or adding onto a stock template--but they're a nonzero fraction of our customers and good at throwing "we're paying you $BIGX so just do it the way we want" bricks through windows. Management on our end typically acquiesces because they haven't figured out whether we're a bespoke professional-services software consultancy or not, and reliably chooses "quick thing that makes one customer stop complaining now" over "actually think about the problem and how a solution that works for one customer might box us into a corner down the road". I'd love for things to be simpler, I really would. Sadly, we don't have infinite time and resources to try and abstract away every possible decision that's necessary when deploying complex, rapidly-evolving software into every conceivable (and often badly-designed) network architecture.
|
# ? Dec 14, 2019 20:58 |
|
CMYK BLYAT! posted:I'd love for things to be simpler, I really would. Sadly, we don't have infinite time and resources to try and abstract away every possible decision that's necessary when deploying complex, rapidly-evolving software into every conceivable (and often badly-designed) network architecture. This is sort of the crux of my argument against Kubernetes. 99% of applications deployed in k8s don't require the complexity that k8s brings. It's resume-driven development for ops teams and it loving sucks to be on the product eng side when dealing with it.
|
# ? Dec 14, 2019 21:16 |
|
Some things -do- require that level of complexity and tuning, though. Our products, for lack of a more detailed description, work at a lower layer than a lot of applications deployed to Kubernetes typically do, and when deployed effectively, can help smooth complexity for other applications in the cluster. The cost, however, is that users need to understand how and why they're doing that smoothing, and how the same might apply to the thing they're smoothing with.

Industry actors want an airplane that can be operated by an idiot babby to get from A to B, but have parts bins full of various engines, chassis, and fuel. They have observed that other actors have built airplanes to great success, want to build their own, and issue a very Picardian "make it so" directive to their subordinates, who have never built an airplane before but somewhat understand how the components work from their previous experience working on trains. Somehow, however, they jettison much of their knowledge and return to basic experimentation, and try to make their fully-electric plane fly by filling it with diesel fuel, or install an engine designed to power a golf cart into an airframe. This fails, and everyone roundly concludes that airplanes are impossible and that horse-drawn carriages are the only viable transportation option.
|
# ? Dec 14, 2019 22:06 |