http://rapgenius.com/James-somers-herokus-ugly-secret-lyrics
|
# ? Feb 14, 2013 01:24 |
|
the latest how!! thread was neat because it taught me that finding whether a python object is callable or not is halting-problem-equivalent. probably python is my favorite language, it has so many hidden gems
|
# ? Feb 14, 2013 02:12 |
|
wow someone is butthurt *makes simulation which assumes global knowledge of all dymo states* *ignores overhead of collating said information* on the other hand, if heroku upgraded its random distribution from 'pick one at random' to 'pick two at random, use least overloaded' http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf this wouldn't be an issue.
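the 'two random choices' trick from that handbook chapter is easy to demo; here's a toy python simulation (dyno counts and request counts invented, and it assumes each pick can cheaply see the candidates' current load, which is exactly the part heroku can't do for free):

```python
import random

def simulate(n_dynos=100, n_requests=100_000, choices=1, seed=42):
    """Dispatch requests to dynos; each request adds one unit of load.
    choices=1 is pure random routing; choices=2 is 'pick two dynos at
    random, send to the less loaded one'. Returns the worst dyno's load."""
    rng = random.Random(seed)
    load = [0] * n_dynos
    for _ in range(n_requests):
        picks = [rng.randrange(n_dynos) for _ in range(choices)]
        load[min(picks, key=load.__getitem__)] += 1
    return max(load)

worst_random = simulate(choices=1)
worst_two = simulate(choices=2)
print(worst_random, worst_two)
```

with the mean load at 1000 per dyno, the one-choice max overshoots by a few standard deviations while the two-choice max hugs the mean, which is the whole point of the paper.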
|
# ? Feb 14, 2013 03:11 |
|
tef posted:wow someone is butthurt *makes simulation which assumes global knowledge of all dymo states* *ignores overhead of collating said information* The paper only works for sequential dispatching and has special clauses for the parallel one, but doesn't go into details. As far as I understand, you need to have information on each of the bins, and information on all the queues shared. The problem with Heroku is that it (likely) breaks machine barriers, and getting all the information across (likely dozens of) servers to be consistent in any way is going to be a nightmare. I'd not be surprised if this paper doesn't apply very well at that scale, with the parallelism level and the latencies involved.

You could probably solve the problem by going back to a queue-based model, but where you have N queues to dispatch randomly to, and each queue handles M instances sequentially. Then the complexity is probably shifted to knowing how many queues to use, and how they should know what dynos to serve. With enough queues, you can do sequential dispatch and get something rather reasonable.

IMO the biggest problem is ruby people having apps where the server can handle one request at a time and expecting a third party to fix that terrible design. If you allow a few parallel requests per machine, random dispatching is way less likely to gently caress up, and concurrency will be able to compensate for the occasional slow requests, up to N-1 of them, assuming you have N concurrent processes running on each dyno.
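the 'concurrency papers over random routing' point is easy to check with a toy queueing simulation in python (all numbers invented: 1% of requests take 2 s, the rest 10 ms; each dyno's workers serve FIFO, next request goes to whichever worker frees up first):

```python
import heapq
import random

def mean_sojourn(n_dynos=50, concurrency=1, n_requests=20_000, seed=7):
    """Random routing to dynos; each dyno runs `concurrency` workers.
    Returns the mean time a request spends queued + served, in seconds."""
    rng = random.Random(seed)
    # each dyno is a min-heap of times at which its workers become free
    dynos = [[0.0] * concurrency for _ in range(n_dynos)]
    total, now = 0.0, 0.0
    for _ in range(n_requests):
        now += 0.001  # one arrival per millisecond, routed at random
        workers = dynos[rng.randrange(n_dynos)]
        service = 2.0 if rng.random() < 0.01 else 0.01
        free_at = heapq.heappop(workers)   # earliest-available worker
        start = max(now, free_at)          # wait if that worker is busy
        heapq.heappush(workers, start + service)
        total += (start + service) - now
    return total / n_requests

single = mean_sojourn(concurrency=1)
quad = mean_sojourn(concurrency=4)
print(single, quad)
```

with one worker per dyno the occasional 2 s request backs up everything randomly routed behind it; give each dyno a few workers and the exact same random routing is mostly fine.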
|
# ? Feb 14, 2013 03:30 |
|
MononcQc posted:The paper only works for sequential dispatching and has special clauses for parallel one, but doesn't go into details. As far as I understand, you need to have information on each of the bins, and information on all the queues shared. The problem with Heroku is that it (likely) breaks machine barriers and getting all the information across (likely dozens) of servers to be consistent in any way is going to be a nightmare. I guess i'm saying that in the absence of global information, local information can help with random choice. For ex: dynos could push back preemptively, or return load information in responses, to avoid hotspots. quote:IMO the biggest problem is ruby people having apps where the server can handle one request at a time and expecting a third party to fix that terrible design. yep quote:If you allow a few parallel requests per machine, random dispatching is way less likely to gently caress up, and concurrency will be able to compensate for the occasional slow requests that will hit you up to N-1 of them, assuming you have N concurrent processes running on each dyno. i believe this was the rationale for using random dispatch over omniscient dispatch
|
# ? Feb 14, 2013 03:47 |
|
also the waah waah we aren't the prodigal son any more bit was hilarious
|
# ? Feb 14, 2013 03:48 |
|
web "developers" of rapegenius flabbergasted by the slow performance of ruby....
|
# ? Feb 14, 2013 03:50 |
|
tef posted:I guess i'm saying that in the absence of global information, local information can help with random choice. For ex: dynos could push back preemptively, or return load information in responses, to avoid hotspots. I think you would need to know how many requests are currently being handled and how many can be handled. When there's a tie, then you could base yourself on load to figure it out, but that's ultimately application dependent. tef posted:i believe this was the rationale for using random dispatch over omniscient dispatch "...and I would have gotten away with it, too, if it hadn't been for you meddling rubyists with one request per server!" MononcQc fucked around with this message at 03:56 on Feb 14, 2013 |
# ? Feb 14, 2013 03:53 |
|
Shaggar posted:web "developers" of rapegenius flabbergasted by the slow performance of ruby....
|
# ? Feb 14, 2013 03:54 |
|
quote:"It ain’t where you’re from, it’s where you’re at" looooooooooooooooollllll
|
# ? Feb 14, 2013 03:54 |
|
MononcQc posted:It could possibly help. The problem is that 'load' is not a good way to measure anything when you may have a request that sits idle waiting for a long DB request on a server that can only accept one at a time. I mean load in a handwavey sense: 'a number where bigger = longer latency to serve a request' quote:"...and I would have gotten away with it, too, if it hadn't been for you meddling rubyists with one request per server!" yessssssssss
|
# ? Feb 14, 2013 03:56 |
|
idiot retard baby hosting company too stupid for moron web "developing" scrublords lmao they deserve each other
|
# ? Feb 14, 2013 03:57 |
|
quote:The unfortunate conclusion being that Heroku is not appropriate for any Rails app that’s more than a toy. seriously cant stop laughing at this
|
# ? Feb 14, 2013 03:58 |
|
PleasingFungus posted:the latest how!! thread was neat because it taught me that finding whether a python object is callable or not is halting-problem-equivalent so long as the only way to tell if an object is callable is by executing __call__, and __call__ can be an arbitrary function, then yes, any solution to that problem would also have to solve the halting problem. i wouldnt think of it as a deficiency, maybe
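for the curious, python 3's callable() sidesteps this by doing a purely structural check on the type rather than running anything (class names below are made up):

```python
class Sneaky:
    def __call__(self):
        while True:   # never halts, but callable() doesn't care
            pass

class Plain:
    pass

s, p = Sneaky(), Plain()
p.__call__ = lambda: 42   # instance attribute: the call protocol never sees it

print(callable(s))  # True  -- callable() only looks for __call__ on the *type*
print(callable(p))  # False -- even though p.__call__ exists and would return 42
# callable() is a cheap syntactic check; deciding whether s() ever *returns*
# really would require solving the halting problem
```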
|
# ? Feb 14, 2013 03:59 |
|
Shaggar posted:looooooooooooooooollllll this is like when people write terrible java code and blame the jvm for speed.
|
# ? Feb 14, 2013 04:00 |
|
quote:If you have 2 unicorn servers and you happen to get 3 slow requests routed to it, you are still screwed! Unicorn will indeed increase performance, but not in any fundamental way aaahhhhhh
|
# ? Feb 14, 2013 04:00 |
|
tef posted:this is like when people write terrible java code and blame the jvm for speed. nah, jvm:heroku doesn't work cause the jvm rocks. heroku is inherently flawed since its a fad-lang hosting system.
|
# ? Feb 14, 2013 04:03 |
|
also perl8.org tee hee
|
# ? Feb 14, 2013 04:03 |
|
Shaggar posted:nah, jvm:heroku doesn't work cause the jvm rocks. heroku is inherently flawed since its a fad-lang hosting system. Heroku can host Java
|
# ? Feb 14, 2013 04:05 |
|
sure, but idk why any real person would pick a hosting company with a design so flawed that it cant figure out how to route requests for one language without breaking others. that's pretty awful. there are better, cheaper java hosting alternatives so idk why you'd pick heroku unless ur a fad-langer trying to break free of their shackles.
|
# ? Feb 14, 2013 04:09 |
|
so why can't dynos like register themselves with the router mesh and that way the router can shoot requests off to available dynos
|
# ? Feb 14, 2013 04:13 |
|
i still don't see how doing the job of a tiny phone switch is A Hard Problem
|
# ? Feb 14, 2013 04:13 |
|
FamDav posted:so why can't dynos like register themselves with the router mesh and that way the router can shoot requests off to available dynos
|
# ? Feb 14, 2013 04:16 |
|
Nomnom Cookie posted:i still don't see how doing the job of a tiny phone switch is A Hard Problem it's not, there are millions of load balancers out there doing round robin instead of random load balancing. it just doesn't work for heroku because their 'routing mesh' handles requests for all, like, two million applications. I imagine they do some sharding.
|
# ? Feb 14, 2013 04:41 |
|
you know maybe those guys wouldn't have such high request volume to their rails instances if they stopped using them for static content
|
# ? Feb 14, 2013 04:42 |
|
Nevergirls posted:it's not, there are millions of load balancers out there doing round robin instead of random load balancing so heroku is a bunch of dipshits. thats what i thought
|
# ? Feb 14, 2013 04:43 |
|
we spend 20000 whole us dollars on this hosting and my unicorns cant get to the rubys??? this is bullshit
|
# ? Feb 14, 2013 04:46 |
|
Nomnom Cookie posted:so heroku is a bunch of dipshits. thats what i thought the joke is that everyone involved is retarded and they're the pinnacle of modern web "development"
|
# ? Feb 14, 2013 04:46 |
|
Shaggar posted:the joke is that everyone involved is retarded and they're the pinnacle of modern web "development" the best part is that matz literally works for them
|
# ? Feb 14, 2013 04:52 |
|
Shaggar posted:we spend 20000 whole us dollars on this hosting and my unicorns cant get to the rubys??? this is bullshit
|
# ? Feb 14, 2013 04:53 |
|
web development makes a lot more sense if you understand scaling to mean "being not poo poo" these guys thought heroku would make them not poo poo for $20,000/mo and it didn't happen and they're pissed
|
# ? Feb 14, 2013 04:56 |
|
Nevergirls posted:it's not, there are millions of load balancers out there doing round robin instead of random load balancing i mean circuit switching not RR
|
# ? Feb 14, 2013 04:57 |
|
Nevergirls posted:I realize you're shaggaring here and whimsical ruby naming schemes are deplorable but rapgenius can't be using unicorn because it actually does handle concurrent requests they mentioned unicorns in the comments so I went w/ it.
|
# ? Feb 14, 2013 05:06 |
|
so wait am i right that this isnt an incredibly difficult problem or did i get shagged
|
# ? Feb 14, 2013 05:55 |
|
FamDav posted:so wait am i right that this isnt an incredibly difficult problem or did i get shagged yep, as far as I can see it:
rappers use a rails http server with a global lock and no green threads. it can only handle one connection at a time.
allegedly, heroku mitigated this with magic balancing, picking the least congested dymo.
heroku supports non broken http servers, so it switches to random load balancing, as god intended.
amateur rails dev kicks up fuss because heroku won't break every other app to make their broken code look fast.
tef fucked around with this message at 06:18 on Feb 14, 2013 |
# ? Feb 14, 2013 06:07 |
|
gucci void main posted:literally
|
# ? Feb 14, 2013 06:15 |
|
there are really 3 problems 1) ruby/rails sucks 2) herokus solution to fixing ruby/rails sucking sucks 3) web "developer" sucks for picking ruby/rails and blames heroku for not fixing ruby/rails sucking
|
# ? Feb 14, 2013 06:19 |
|
Pie Colony posted:if you still don't know you can get all the latest how posts here I just got off a short contract where this exact thing happened. The company had an MVP they were using that was made by the founders. They were obviously young and inexperienced. The code was awful to say the least. I told them straight up it would be best to just throw all this code away and start again from scratch. It's best to do it now while the company is young, rather than much later. They were neck deep in technical debt. They claimed it would take "a really long time" to rewrite everything. I can understand a junior developer thinking that (it apparently took them 2 years to create the code they were using at that point). But using good engineering practices, building a site that did what they needed could have been done within a few weeks using Django and MySQL. It's really hard to make this point stick when you're dealing with developers in that "sophomore zone" of programming ability. Needless to say, they told me "no way jose". I ended it with them right then and there. I can't tell you how many times I've seen that happen.
|
# ? Feb 14, 2013 06:43 |
|
Nevergirls posted:it's not, there are millions of load balancers out there doing round robin instead of random load balancing they can't really shard because any given mesh member doesn't know which app any given request is for until it sees the host request header. there's some internal service that maps host names to running dynos, or can tell the router to hold the request for a dyno that's booting. how do you round robin when any given mesh member might not know what dynos have been used in the last few ms? in a mesh of 100 nodes? in a mesh of 1000 nodes? in a mesh of tens of thousands of nodes?
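a minimal sketch of why the host header matters, with a made-up routing table standing in for heroku's internal hostname-to-dynos service (all names hypothetical):

```python
import random

# hypothetical routing table: maps a Host header to the dynos
# currently running that app
ROUTES = {
    "rapgenius.com": ["dyno-1", "dyno-2", "dyno-3"],
    "example.herokuapp.com": ["dyno-9"],
}

def route(request_headers, rng=random):
    """Pick a dyno for a request. The app is only known once the Host
    header has been parsed, which is why a mesh node can't shard or
    balance traffic before it has read the request."""
    host = request_headers["Host"].lower()
    dynos = ROUTES.get(host)
    if dynos is None:
        return None  # 404 at the router: no such app
    return rng.choice(dynos)  # random dispatch, per the blog post

print(route({"Host": "rapgenius.com"}))
```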
|
# ? Feb 14, 2013 06:57 |
|
FamDav posted:so why can't dynos like register themselves with the router mesh and that way the router can shoot requests off to available dynos like, register when they handle a request? the mesh could figure it out because the mesh knows when the response is done, but chatter between different mesh members doesn't work at scale (you'd have to tell every mesh member about every response in the system)
|
# ? Feb 14, 2013 07:00 |