|
MononcQc posted:given a queue is often a central point of failure, there is on average a higher probability that the queue is down as opposed to a majority of the servers or workers handling the requests. then don't use a queue. it's not helping you.
|
# ? Mar 21, 2018 20:07 |
|
|
|
Shaggar posted:you cant have backpressure and queues. they're incompatible. either you have important data that needs to be queued in which case you can never under any circumstance tell the clients to gently caress off or you don't have important data and you can tell your clients to gently caress off at which point you should not be using a queue because it does not help you. just have the clients talk directly to the processor that the queue is feeding. A queue that is out of memory is either going to refuse new input, drop existing ones to make room, or crash, which will unfortunately do both at once. A queue for which you have no mechanism to deal with overload is the worst of both worlds: lack of a decision means you lose data and stop being able to write. I don't care if your YOSPOS trigger-happy shaggarific solution is "fire the dude who made a mistake", the mistake will be made anyway and you're better prepared if you consider that humans operate your systems, that humans will make mistakes, and that no amount of preset procedures will prevent that. The backpressure your queue implements might result in a few requests being dropped, and then the system discovering that and going offline ASAP to prevent people from trying to make requests that can never be fulfilled, for example. This is a thousand times better than your lovely "lol just dont ever crash" joke of a solution. If imperfect operators are not part of your system plan, then your system plan is bad and you are the one to blame for the failures encountered.
|
# ? Mar 21, 2018 20:13 |
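The overload behavior being argued about can be sketched concretely. A minimal illustration (names and limits are invented, not from any post): a bounded queue that sheds load at enqueue time instead of buffering without limit, so the producer learns about overload immediately rather than discovering it later as an out-of-memory crash.

```python
import queue

# A bounded queue: capacity of 2 is an arbitrary choice for demonstration.
work = queue.Queue(maxsize=2)

def submit(item):
    """Try to enqueue; tell the caller to back off if we're full."""
    try:
        work.put_nowait(item)
        return "accepted"
    except queue.Full:
        # backpressure: overload is propagated to the client immediately
        return "rejected"

results = [submit(i) for i in range(4)]
# the first two items fit; the rest are shed at the edge
```

The point of the sketch is only that rejection is an explicit, observable decision made at insert time, instead of an implicit one made later by the OOM killer.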
|
although scratch that, no software developer is ever responsible for anything, this is not real engineering
|
# ? Mar 21, 2018 20:15 |
|
NihilCredo posted:my favourite part about amazon sqs is the amazon techies completely dodging my questions re: 'is there any issue with you can end your question right there. they've probably been told that answering an explicit "is there an issue" question can incur some sort of liability. try rephrasing your question to not imply they may be at fault for something
|
# ? Mar 21, 2018 20:36 |
|
Soricidus posted:of course I still don’t know of anyone who’s managed to switch to java 9 yet, let alone considering 10. we’re certainly stuck on 8 for the time being. that compatibility break was pretty bad. where is the compatibility break?
|
# ? Mar 21, 2018 20:59 |
|
Shaggar posted:you cant have backpressure and queues. they're incompatible. either you have important data that needs to be queued in which case you can never under any circumstance tell the clients to gently caress off or you don't have important data and you can tell your clients to gently caress off at which point you should not be using a queue because it does not help you. just have the clients talk directly to the processor that the queue is feeding. under the right circumstances, a queue makes an excellent buffer. there are at least two reasons you might want a bigass buffer
|
# ? Mar 21, 2018 21:00 |
|
MononcQc posted:A queue that is out of memory is either going to refuse new input, drop existing ones to make room, or crash, which will unfortunately do both at once. the idea that the queue can understand that it's overloaded and disable itself is a joke. it's going to either be crashed or it's going to be working. if it could detect that it's down you'd just restart it. I only have one significant queue (and it's in a db) and it's never logically down because its existence is tied in with the clients that use it. they can't do work that generates items for the queue unless the entire system is up. the database is technically a single point of failure, but it also guarantees I never lose data like you are suggesting I should. also, the way I deal with overload is expanding the number of workers processing the output. ignoring the client input is retarded unless your data doesn't matter, in which case you don't need a queue.
|
# ? Mar 21, 2018 21:08 |
|
Notorious b.s.d. posted:under the right circumstances, a queue makes an excellent buffer right, your buffer is there to ensure you never have to reject incoming items.
|
# ? Mar 21, 2018 21:10 |
|
MononcQc posted:although scratch that, no software developer is ever responsible for anything, this is not real engineering this is fine because most software doesn't actually matter
|
# ? Mar 21, 2018 21:13 |
|
One normally communicates with a database via a queue, it's a bit odd to proceed and say queues are bad. For certified messaging you always need a permanent store such as a database, the larger ones also have built in transaction managers to enable complex messaging patterns. One goal of a queue is to improve performance by removing synchronous stepping, but that is looking at the smaller picture. In situations like back and middle office banking messages need to flow through multiple systems and queuing is the tool to power the decoupling of these ridiculously complex, expensive, independently managed and developed monsters. Opinions from working at teh information bus company.
|
# ? Mar 21, 2018 21:15 |
|
Shaggar posted:the idea that the queue can understand that it's overloaded and disable itself is a joke. it's going to either be crashed or it's going to be working. if it could detect that it's down you'd just restart it. I only have one significant queue (and it's in a db) and it's never logically down because its existence is tied in with the clients that use it. they can't do work that generates items for the queue unless the entire system is up. Your database queue implements exactly what I'm talking about: if it is in no position to work, then the clients go down as well and are in no position to insert data into it. This means that the queue outage is propagated to the clients, who stop all input, and that the data in the existing queue should not be lost. That's good. That's exactly the kind of backpressure propagation I was talking about. Now if you start thinking a bit more, you can imagine something like clients that have circuit-breakers, such that if their queue is decoupled from the DB (or stored over multiple DBs) and it falls over or stops being responsive (because it got a 30s GC pause or some poo poo that happens in Cassandra in the real world, for one example), then the client knows the queue or storage engine cannot take input right now, and they can do something like return an error message until they detect things to be working again. This also lets you do partial fault handling, where one queue being down does not necessarily equal all queues being down, or lets you do things like retrying to store data in failover or standby instances, which gives you better uptime than just restarting that one central queue or DB. Or they just bail out and do the basic thing, that's okay too.
If you flip it around and the queue software has a high watermark when it starts being at risk for stability (say, by detecting memory pressure and/or entry count) then the queue system can preemptively return the error code to people trying to insert data into it, and the consumers can keep working! This means that instead of crashing and needing long recovery cycles, you do flow control with your backpressure mechanism, ensuring more throughput overall than what you'd get through violent crash/restart cycles. MononcQc fucked around with this message at 21:19 on Mar 21, 2018 |
# ? Mar 21, 2018 21:16 |
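The high-watermark flow control described in the post above can be sketched in a few lines. This is a toy, not any real queue product's behavior, and the class name, threshold, and method names are all invented: producers get an explicit refusal once the depth crosses the watermark, while consumers keep draining, so the system throttles instead of entering a crash/restart cycle.

```python
from collections import deque

class WatermarkQueue:
    """Toy queue that refuses new input above a configured depth."""

    def __init__(self, high_watermark):
        self.items = deque()
        self.high_watermark = high_watermark

    def offer(self, item):
        if len(self.items) >= self.high_watermark:
            return False  # "try again later" instead of falling over
        self.items.append(item)
        return True

    def poll(self):
        # consumers are unaffected by the watermark and keep working
        return self.items.popleft() if self.items else None

q = WatermarkQueue(high_watermark=3)
accepted = [q.offer(n) for n in range(5)]  # last two offers are refused
q.poll()                                   # a consumer drains one entry
recovered = q.offer("retry")               # and producers can insert again
```

A real implementation would also want a lower watermark to avoid flapping right at the threshold, but the shape of the mechanism is the same.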
|
Notorious b.s.d. posted:under the right circumstances, a queue makes an excellent buffer sure, but you should handle both of these with an internal queue. you should never just expose a queue for clients to blindly write into
|
# ? Mar 21, 2018 21:17 |
|
if your data is important enough to preserve then it's important enough for its origin to hang on to it until your service is ready to accept. even a limited-functionality embedded sensor of some kind talking to a big beefy cloud server can't rely on an undisrupted link to that cloud server at all times.
|
# ? Mar 21, 2018 21:45 |
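The "origin hangs on to it" idea can be sketched as a client-side buffer that only discards a reading once the server acknowledges it; everything here (class, callback, readings) is hypothetical, purely to illustrate that a dropped link loses nothing.

```python
class BufferingSensorClient:
    """Holds readings locally until the server acks them."""

    def __init__(self, send):
        self.send = send   # callable returning True on server ack
        self.pending = []

    def record(self, reading):
        self.pending.append(reading)
        self.flush()

    def flush(self):
        while self.pending:
            if not self.send(self.pending[0]):
                break      # link is down; keep the reading and retry later
            self.pending.pop(0)

delivered = []
link_up = False

def send(reading):
    # stand-in for the network call to the big beefy cloud server
    if link_up:
        delivered.append(reading)
        return True
    return False

client = BufferingSensorClient(send)
client.record(1)
client.record(2)   # link down: both readings retained locally
link_up = True
client.flush()     # link restored: buffered readings drain in order
```

As the surrounding posts note, the real constraint on a device like this is the bounded memory for `pending`, which forces a policy decision (drop oldest, drop newest, or stop sampling) once it fills.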
|
extending the queue down into the client is a bad idea especially since the client is generally the least reliable part of the system
|
# ? Mar 21, 2018 21:49 |
|
Sapozhnik posted:if your data is important enough to preserve then it's important enough for its origin to hang on to it until your service is ready to accept. holdup circuit calculation for "the entirety of data ive collected since boot"
|
# ? Mar 21, 2018 22:02 |
|
the talent deficit posted:sure, but you should handle both of these with an internal queue. you should never just expose a queue for clients to blindly write into the clients are a swarm of individually unreliable components. the queue is Kafka; its purpose in life is to separate the high-integrity, fault-tolerant buffer from your application. all of the producers and consumers can rely on the queue to manage state and integrity for messages in flight, so that complexity isn't built into the apps
|
# ? Mar 21, 2018 22:05 |
|
Sapozhnik posted:if your data is important enough to preserve then it's important enough for its origin to hang on to it until your service is ready to accept. a remote sensor on the far end of an unreliable link is probably not a great place for an enterprise service bus or Kafka queue or really anything. that is a bad and weird place to be, and the normal solutions are not applicable
|
# ? Mar 21, 2018 22:07 |
|
Soricidus posted:queue queue more
|
# ? Mar 21, 2018 22:15 |
|
Shaggar posted:extending the queue down into the client is a bad idea especially since the client is generally the least reliable part of the system the queue remains as it was, you can just try to make the clients a bit smarter to cope with the inevitable.
|
# ? Mar 21, 2018 22:21 |
|
eschaton posted:you can end your question right there thanks for the tip my actual phrasing was roughly: "1) is there a recommended usage pattern for querying and purging a sqs (eg "please purge it at least once a minute") and 2) what sort of things might happen if we deviate from such a pattern?" relevant, we didn't set up the queue for ourselves, we were working with a service for amazon merchants NihilCredo fucked around with this message at 23:02 on Mar 21, 2018 |
# ? Mar 21, 2018 22:59 |
|
MononcQc posted:the queue remains as it was, you can just try to make the clients a bit smarter to cope with the inevitable. you could set up a queue inside each client, then those queues could feed data into the server-side queue
|
# ? Mar 22, 2018 00:24 |
|
um so yes the persistent durable replicated queue that one the one we're using how can we purge it every minute
|
# ? Mar 22, 2018 01:36 |
|
|
# ? Mar 22, 2018 02:19 |
|
post your biggest queues
|
# ? Mar 22, 2018 02:37 |
|
a few hundred a second through zeroMQ is my personal best
|
# ? Mar 22, 2018 02:40 |
|
10mb per message
|
# ? Mar 22, 2018 02:43 |
|
worked with a guy who didn’t understand topics and instead created thousands of queues, one per “entity”, each with max one message at a time
|
# ? Mar 22, 2018 02:46 |
|
eschaton posted:you can end your question right there aws can also be quite cagey on discussing implementation details that are not desired and will (in the fullness of time!) be changed
|
# ? Mar 22, 2018 04:00 |
|
rt4 posted:you could set up a queue inside each client, then those queues could feed data into the server-side queue if you get reliable clients then yes, definitely, and then you can get rid of the server-side queue entirely and replace it with a load balancer, getting the same result but with a horizontally scalable component instead of a single point of failure.
|
# ? Mar 22, 2018 04:04 |
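The load-balancer alternative mentioned above can be sketched minimally. The balancer class and the backends are invented for illustration: once clients buffer and retry their own work, a stateless round-robin spread over several workers can replace the single central queue, and a total refusal surfaces as backpressure rather than as lost data.

```python
class RoundRobinBalancer:
    """Stateless fan-out over worker backends; no central buffer."""

    def __init__(self, backends):
        self.backends = backends
        self.i = 0

    def dispatch(self, item):
        # try each backend at most once, starting where we left off
        for _ in range(len(self.backends)):
            backend = self.backends[self.i]
            self.i = (self.i + 1) % len(self.backends)
            if backend(item):
                return True
        return False  # every worker refused: client keeps the item and retries

a, b = [], []
backends = [lambda x: a.append(x) or True,   # toy workers that always accept
            lambda x: b.append(x) or True]
lb = RoundRobinBalancer(backends)
ok = [lb.dispatch(n) for n in range(4)]      # work alternates across workers
```

The trade being made is the one FamDav describes next: durability moves out of a central component and into the callers that actually know whether each item is worth retrying.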
|
has monococqc not linked https://ferd.ca/queues-don-t-fix-overload.html because if he hasn't he should link https://ferd.ca/queues-don-t-fix-overload.htmlMononcQc posted:if you get reliable clients then yes definitely and then you can get rid of the server-side queue entirely and replace it by a load-balancer and get the same result but with a horizontally scalable component instead of a single point of failure. or more abstractly: in the same way that you shouldn't handle every exception at the lowest possible level in your system (because only the higher-level callers know what's right to do), you should also push the decision about the importance, necessity, and lifetime of a particular piece of work in your system out towards the services that have the most context FamDav fucked around with this message at 04:47 on Mar 22, 2018 |
# ? Mar 22, 2018 04:06 |
|
Soricidus posted:yes, one of the big concepts behind java 9 modules was to make bundling a jvm less terrible by letting you only bundle the parts of it that your software uses is jigsaw even complete? there was all that drama and the release date of java 9 getting pushed back several times, i don't know how it shook out
|
# ? Mar 22, 2018 04:46 |
|
quote:monococqc while we're butchering the username, what's the story behind it? if you can say, I don't mean for you to reveal sensitive info or anything. I parse it as "my uncle quebec" but I suspect that's a bad guess
|
# ? Mar 22, 2018 05:47 |
|
it’s quebecois for “echidna penis”
|
# ? Mar 22, 2018 05:53 |
|
eschaton posted:it’s quebecois for “echidna penis” please. the canonical name is "echopenis"
|
# ? Mar 22, 2018 05:57 |
|
cuz echi's dick just be out in the wilderness shouting "hellooooo hellohellohello?"
|
# ? Mar 22, 2018 05:59 |
|
keys to yospos stardom:
|
# ? Mar 22, 2018 06:20 |
|
Shaggar posted:extending the queue down into the client is a bad idea especially since the client is generally the least reliable part of the system That would be the user
|
# ? Mar 22, 2018 06:23 |
|
tef posted:um so yes yeah that was pretty much my reaction too the first time it happened. well, no, it was 'surely i must have made some really amateurish mistake in reading the docs, it must have been spelled out somewhere that it needs to be purged'. i really wish i could show you the emails from the yoga-enthusiast amazon business manager repeatedly begging us: 'uhm, the queue hasn't been purged for 3 days and it's causing us ~~~problems~~~, can you pleeease clean it?'
|
# ? Mar 22, 2018 10:30 |
|
Notorious b.s.d. posted:where is the compatibility break? before modules there was no way to hide internal apis, you could only ask nicely. and the public apis are usually inadequate. so many libraries rely on internal apis, and our code relies on those libraries, and now it doesn't run at all on java 9. the end result is that you have a choice between supporting java 8 only, java 9 only, or doing significantly more work to maintain two versions of your code. that's a non-starter, and java 8 works fine, so we're sticking with that until all the third-party libraries get updated or replacements emerge.
|
# ? Mar 22, 2018 11:32 |
|
|
|
pokeyman posted:while we're butchering the username, what's the story behind it? if you can say, I don't mean for you to reveal sensitive info or anything. I parse it as "my uncle quebec" but I suspect that's a bad guess that's the correct reading. 'Mononc' is a joual deformation / contraction of 'mon oncle', as used by children to just directly mean 'uncle'. I picked it as a teen because back then: I was a bit too patriotic for no reason, the Qc bit would probably support my apologies for being bad at English, I thought it would be funny to be a kid named 'uncle', the name was always free everywhere, and I really liked (and still do) the artist 'Mononc Serge'. I mostly stuck with it because the name was still free everywhere, so I never needed to adapt it. When I started getting more technical accounts, 'ferd' turned out to often be free, so I started using that one in a bunch of places as well.
|
# ? Mar 22, 2018 12:19 |