|
lamport is dead. spanner is a distributed transactional db, paxos is key. Zookeeper is basically a paxos server; lotta stuff uses it for membership/leader elections/other cluster management. those are the only 2 implementations off the top of my head, turns out paxos is kind of a pita to actually implement
|
# ? Jun 3, 2013 01:38 |
|
Nomnom Cookie posted:
lamport is dead

nope

quote:
spanner is a distributed transactional db, paxos is key

that, and basically running a datacentre on a clock signal

quote:
Zookeeper is basically a paxos server, lotta stuff uses it for membership/leader elections/other cluster management

(it's built on an atomic broadcast primitive, not paxos)

quote:
those are the only 2 implementations off the top of my head, turns out paxos is kind of a pita to actually implement

yep
|
# ? Jun 3, 2013 01:47 |
|
Lamport is still alive. The wiki article lists uses of Paxos, and I know Scalaris also uses it. And yeah, the paper tef linked to does mention it's a PITA.
|
# ? Jun 3, 2013 01:48 |
|
ah Chubby uses paxos. I think I heard ZK was similar to chubby and i guess assumed it was built on paxos too.
|
# ? Jun 3, 2013 02:03 |
|
Nomnom Cookie posted:
ah Chubby uses paxos. I think I heard ZK was similar to chubby and i guess assumed it was built on paxos too.

zab is only really worth mentioning because of the novelty of not being paxos. also, raft is a new consensus algorithm
|
# ? Jun 3, 2013 02:41 |
|
drat it, I had no clue what any of you fuckers were talking about in this last page and now I'm reading that paper tef posted. Looking good so far
|
# ? Jun 3, 2013 02:44 |
|
Hard NOP Life posted:
drat it, I had no clue what any of you fuckers were talking about in this last page and now I'm reading that paper tef posted. Looking good so far
|
# ? Jun 3, 2013 02:46 |
|
Zab paper: http://www.stanford.edu/class/cs347/reading/zab.pdf
Raft paper: https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf

(E: the Zab paper makes bad assumptions regarding paxos, i.e. that multiple proposers can use the same sequence id to propose a change)

MononcQc fucked around with this message at 03:12 on Jun 3, 2013 |
# ? Jun 3, 2013 03:06 |
|
jeez those guys really bag on paxos (i am one of the dummies who doesn't understand paxos)
|
# ? Jun 3, 2013 03:21 |
|
That google paper tef posted does a pretty decent job of describing it in layman's terms.
|
# ? Jun 3, 2013 03:29 |
|
i probably read it at one point...never got to where id be comfortable with implementing it. does me good to hear that high foreheads have trouble with paxos too
|
# ? Jun 3, 2013 03:37 |
|
Nomnom Cookie posted:
jeez those guys really bag on paxos

Paxos is basically a very, very generic algorithm. It can work with unreliable transmissions (dropped, repeated, or out-of-order messages are all possible, though not corrupted ones), a lack of clear leaders (masters), and unbounded numbers of failures over time (you still gotta have a quorum to make progress, so it's a CP algorithm). It works asynchronously, tends to provide globally consistent state (what can somewhat be stated as state-machine replication), and has variants that can deal with the presence of byzantine processes (say someone owns a few insecure nodes and tries to corrupt your data).

It doesn't especially care about being fast, having a high throughput, or being easy to understand or implement. It comes in multiple varieties with different tradeoffs, and it does appear to be a pain in the rear end to implement right.

Things like Zab (ZooKeeper Atomic Broadcast) appear not to need to be as solid as Paxos (the protocol requires messages to be in order, and uses synchronous cycles, for example), and make different assumptions regarding causality (not all pieces of state are causal for one another). They also care more about things like speed and throughput.

Raft, on its own end, seems to be annoyed at basic Paxos, the variant that works with all crazy kinds of failures and is not optimized for stable clusters but gets used as the foundation to prove that later optimized forms (multi-paxos) can work fine and be optimal. I'm still going through that paper, though, and can't comment on how they fix things at this time.

Hard NOP Life posted:
That google paper tef posted does a pretty decent job of describing it in layman's terms.

It does gloss over a bunch of details in the protocol, but it gives a decent hand-wavy understanding, enough to have a broad idea of what is going on with it.

MononcQc fucked around with this message at 03:57 on Jun 3, 2013 |
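To make the shape of the thing concrete, here's a toy single-decree Paxos in Python. Everything here is a hypothetical sketch of the core promise/accept rules only: no networking, no retries, no durable storage, no leader election, and all the names are made up.

```python
# Toy single-decree Paxos: one proposer talks to acceptors via plain
# function calls. Illustrates only the promise/accept invariants.

class Acceptor:
    def __init__(self):
        self.promised = -1     # highest ballot this acceptor promised to honor
        self.accepted = None   # (ballot, value) last accepted, if any

    def prepare(self, ballot):
        # Phase 1b: promise to ignore lower ballots; report any prior accept.
        if ballot > self.promised:
            self.promised = ballot
            return ("promise", self.accepted)
        return ("nack", None)

    def accept(self, ballot, value):
        # Phase 2b: accept iff no higher ballot has been promised meanwhile.
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return "accepted"
        return "nack"

def propose(acceptors, ballot, value):
    # Phases 1a/2a: a value is chosen once a quorum accepts it.
    quorum = len(acceptors) // 2 + 1
    promises = [a.prepare(ballot) for a in acceptors]
    granted = [p for p in promises if p[0] == "promise"]
    if len(granted) < quorum:
        return None
    # Key safety rule: if any acceptor already accepted a value, we must
    # re-propose the value from the highest prior ballot, not our own.
    prior = [p[1] for p in granted if p[1] is not None]
    if prior:
        value = max(prior)[1]
    acks = [a.accept(ballot, value) for a in acceptors]
    return value if acks.count("accepted") >= quorum else None
```

The part that makes it "globally consistent" is the re-propose rule: a later proposer with a higher ballot is forced to adopt whatever value may already have been chosen, so it can only ever re-confirm it.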
# ? Jun 3, 2013 03:49 |
|
Nomnom Cookie posted:
i probably read it at one point...never got to where id be comfortable with implementing it. does me good to hear that high foreheads have trouble with paxos too

paxos is pretty much the canonical example of such a thing. a crazy complex beast that is poorly explained and badly implemented.

quote:
My attempt at inserting some humor into the subject was a dismal failure. People who attended my lecture remembered Indiana Jones, but not the algorithm. People reading the paper apparently got so distracted by the Greek parable that they didn't understand the algorithm. Among the people I sent the paper to, and who claimed to have read it, were Nancy Lynch, Vassos Hadzilacos, and Phil Bernstein. A couple of months later I emailed them the following question:
|
# ? Jun 3, 2013 03:58 |
|
Made some progress in Raft. It seems to be a variant of Zab where, instead of explicit synchronization during discovery and synchronization phases and invariants for the ordering of events, it will just say "well, the leader is right, overwrite your logs guys!" They show it to be safe, but I'm very worried about a paper on this topic that contains a quote like:

quote:
In practice, we doubt this optimization is necessary, since failures happen infrequently and there are unlikely to be many inconsistent entries.

I'm eager to see what peer review says on that one, because that's one hell of a red flag. Their division of components (leader election, log synchronization and correction, and safety guarantees) is nice, though, and the way they focus on the logs (through their log leader property) rather than the state to be synchronized is pretty interesting. Also kudos for explicitly dealing with cluster membership (and quorum size) changes.
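The "leader is right, overwrite your logs" rule comes down to something like this. A hypothetical Python sketch (not code from the paper; `append_entries` and the tuple layout are made-up names) of how a follower handles a replication request:

```python
# Sketch of the log repair rule: on a replication request, a follower that
# finds a conflicting entry truncates its log from that point and takes
# the leader's entries instead. Entries are (term, command) pairs.

def append_entries(follower_log, prev_index, prev_term, entries):
    # Consistency check: the follower must already hold the entry the
    # leader believes immediately precedes the new ones.
    if prev_index >= 0:
        if prev_index >= len(follower_log) or follower_log[prev_index][0] != prev_term:
            return False  # leader backs up and retries with an earlier prev_index
    # The leader wins: delete any conflicting suffix, append its entries.
    follower_log[prev_index + 1:] = entries
    return True
```

The safety argument then rests entirely on who is allowed to become leader in the first place, since anything past the match point just gets thrown away.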
|
# ? Jun 3, 2013 06:02 |
|
does anyone have a good overview of common data structures so I can pick one that suits my needs
|
# ? Jun 3, 2013 09:27 |
|
json is all you need tbh
|
# ? Jun 3, 2013 10:46 |
|
my stepdads beer posted:
does anyone have a good overview of common data structures so I can pick one that suits my needs

You should use an array.
|
# ? Jun 3, 2013 11:44 |
|
tef posted:
paxos is pretty much the canonical example of such a thing. a crazy complex beast that is poorly explained and badly implemented.

Does it actually offer any advantage over two-phase commit with random backoff and retry on conflict?
|
# ? Jun 3, 2013 11:44 |
|
Afaik 2PC will block entirely if the transaction manager fails, and won't be able to recover if a cohort member also fails at that time. 3PC tries to fix this, but cannot deal with network partitions very well (it assumes it can accurately detect failures), and there is no formal proof of it (that I'm aware of?).

Paxos guarantees it will make progress as long as a quorum of participants is healthy. The Paxos paper does say it may loop if you end up with multiple leaders proposing requests at the same time; it also recommends ways to use this as a basis for leader election, if one is not already in progress, and have things resolve themselves.
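The blocking window is easy to see in a toy sketch (Python; everything here is illustrative, real 2PC involves durable logs, timeouts, and an actual network):

```python
# Toy 2PC illustrating the blocking window: cohorts that voted "yes"
# hold their locks and cannot unilaterally decide, so a coordinator
# crash between the vote and the decision strands them in PREPARED.

PREPARED, COMMITTED, ABORTED = "prepared", "committed", "aborted"

class Cohort:
    def __init__(self):
        self.state = None

    def prepare(self):
        self.state = PREPARED  # vote yes; locks held until a decision arrives
        return True

    def decide(self, outcome):
        self.state = outcome   # locks released only now

def two_phase_commit(cohorts, coordinator_crashes_after_prepare=False):
    votes = [c.prepare() for c in cohorts]
    if coordinator_crashes_after_prepare:
        return None  # no decision is ever sent: cohorts block in PREPARED
    outcome = COMMITTED if all(votes) else ABORTED
    for c in cohorts:
        c.decide(outcome)
    return outcome
```

A stranded cohort can't just abort on its own: some other cohort may already have received a commit, and it has no way to tell.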
|
# ? Jun 3, 2013 13:17 |
|
MononcQc posted:
Afaik 2PC will block entirely if the transaction manager fails, and won't be able to recover if a cohort member also fails at that time.

In my opinion, if you have a single transaction manager with 2PC you're doing it wrong. If you make transaction management part of the client you don't have the single point of failure. You do need to buttress 2PC with recovery strategies for nodes that go down, but you need that for nearly everything.
|
# ? Jun 3, 2013 13:33 |
|
Even with multiple transaction managers, don't you end up in the possible case where, as a cohort member, you block because the transaction manager for whatever current transaction you were working on disappears? Is there an explicit part of the protocol that allows an individual member to give up while waiting for either a commit or abort message without breaking any of the promises made by the protocol?
|
# ? Jun 3, 2013 13:51 |
|
my stepdads beer posted:
does anyone have a good overview of common data structures so I can pick one that suits my needs

what are your needs?
|
# ? Jun 3, 2013 14:11 |
|
MononcQc posted:
Even with multiple transaction managers, don't you end up in the possible case where, as a cohort member, you block because the transaction manager for whatever current transaction you were working on disappears? Is there an explicit part of the protocol that allows an individual member to give up while waiting for either a commit or abort message without breaking any of the promises made by the protocol?

There's no explicit part, but that's where the buttressing comes in: basically, have something else reboot and recover the machine that's borked. That way only machines that are known good are responding to queries at any given time. Paxos would still need this sort of machine management going on, and I'm not sure it simplifies any of it.
|
# ? Jun 3, 2013 14:33 |
|
Paxos doesn't need it; it's part of how the algorithm works. The incrementing and non-overlapping ballot ids, the asynchronous operation, and the requirement to merge concurrently proposed values into a single one (indirectly, based on higher ballot ids and the behaviour of proposers) make this implicit / not necessary, as far as I understand. This is also the part of the protocol that may hinder or slow progress unless you have a unique leader elected.
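For the non-overlapping ballot ids, one common trick (illustrative, not from any particular paper) is to give each node its own residue class, so two nodes can never emit the same number:

```python
# Each node draws ballots from its own residue class mod n_nodes:
# node 0 emits 3, 6, 9, ...; node 1 emits 4, 7, 10, ...; and so on.
# Collisions between nodes are therefore impossible by construction.

def next_ballot(last_seen, node_id, n_nodes):
    # Return a ballot greater than any ballot seen so far, congruent
    # to node_id mod n_nodes. 0 <= node_id < n_nodes is assumed.
    return (last_seen // n_nodes + 1) * n_nodes + node_id
```

A node that gets rejected just calls this again with the higher ballot it observed, which is the "incrementing" half of the property.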
|
# ? Jun 3, 2013 14:51 |
|
MononcQc posted:
Paxos doesn't need it. It's part of how the algorithm works. The incrementing and non-overlapping ballot Ids, asynchronous work, and requirements to merge concurrent requests values into a single one (indirectly, based on higher ballot Ids and the behaviour of proposers) make this implicit / not necessary as far as I understand.

No algorithm can handle machine failure. You still need to do a bunch of stuff to get your cluster healthy again.
|
# ? Jun 3, 2013 15:35 |
|
I see what you mean. This mechanism isn't vital to the algorithm, though. The point is that Paxos doesn't care what the source of the failure is: a timeout, a partition in the network, a hardware failure, lost messages, lack of ordering, etc. They can all be treated as the same instance of "not responding fast enough" as far as the algorithm is concerned. There is also a question of how many failures can be dealt with at once.

So yes, in production you'll probably want something that restarts software, reboots servers, or allows you to add or remove them, because otherwise you'll slowly end up with not enough machines to keep going. But you won't fall into the case where, say, the connection between servers falls into a bad state, a 3 hour timeout resolves it later on, and all of a sudden one end thinks the other crashed while the other just thought things were slow.

Figuring out that a machine is borked in the first place is a tricky thing, and is especially why 3PC is apparently unreliable. If your algorithm or system requires knowing the difference between high latency, crashes, partitions, and/or dropped/repeated/out-of-order messages, it is inherently less safe than Paxos from this point of view, because Paxos doesn't care which it is or how many there are, as long as a quorum can be obtained. (Also, it will not block.)

E: that is assuming a well-implemented Paxos, which is, as discussed earlier, not a trivial thing.

MononcQc fucked around with this message at 15:51 on Jun 3, 2013 |
# ? Jun 3, 2013 15:46 |
|
Bring back circuit switching and solve the Byzantine generals problem thing Alternately: make every computer part of a single globe-spanning VAXcluster
|
# ? Jun 3, 2013 16:45 |
|
PLs(no i wont go to the prole thread, suck my dilz): Debugging PHP. Honeymoon is officially over thats it
|
# ? Jun 3, 2013 17:29 |
|
the difference between a real language and a toy language becomes immediately apparent the moment you have to debug something
|
# ? Jun 3, 2013 18:07 |
|
i still have never written a line of php in my life no wonder im poor
|
# ? Jun 3, 2013 18:08 |
|
aah but you are so much richer in spirit
|
# ? Jun 3, 2013 18:14 |
|
I tried to write some erlang for the first time in several weeks today and it did not go well. I really need a good outlet so i can write it once every couple of days. Same with speaking french
|
# ? Jun 3, 2013 18:33 |
|
same but german and lisp
|
# ? Jun 3, 2013 18:47 |
|
Mr Dog posted:
the difference between a real language and a toy language becomes immediately apparent the moment you have to debug something

a toy language: a language where the only substantial program written in it is the compiler.
|
# ? Jun 3, 2013 19:49 |
|
php's compiler is written in c so i guess its not a toy language a garbage pisstrash language but not a toy language
|
# ? Jun 3, 2013 20:31 |
|
php compiler??
|
# ? Jun 3, 2013 20:47 |
|
Suspicious Dish posted:
php compiler??

most of your modern scripting languages are actually compiled into bytecode before they're executed
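You can watch this happen in Python, for example, with the stdlib dis module (small illustrative snippet):

```python
# Python compiles source to bytecode before the VM executes it;
# dis lets you inspect the instructions a function compiled into.
import dis

def add(a, b):
    return a + b

# The raw bytecode lives on the code object as a bytes blob...
raw = add.__code__.co_code

# ...and dis decodes it into named instructions.
ops = [ins.opname for ins in dis.get_instructions(add)]
```

The exact opcode names vary between interpreter versions, but the point stands: the "scripting" part is just a compile step you don't see.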
|
# ? Jun 3, 2013 21:27 |
|
prefect posted:
most of your modern scripting languages are actually compiled into bytecode before they're executed

i didnt know php had bytecode. now i do. http://blog.golemon.com/2006/06/how-long-is-piece-of-string.html and its p bad
|
# ? Jun 3, 2013 21:38 |
|
the best thing is that you can get a significant speed increase by storing the compiled bytecode, except that some language constructs will change the bytecode. people bitch about .pyc files but php is literally incapable of producing them
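For reference, the Python side of that is just py_compile (stdlib; the temp-file setup here is only to make the example self-contained):

```python
# Byte-compile a module ahead of time; CPython caches the result as a
# .pyc file and reuses it on later imports instead of re-parsing.
import os
import py_compile
import tempfile

# Write a throwaway module to compile.
src = os.path.join(tempfile.mkdtemp(), "mod.py")
with open(src, "w") as f:
    f.write("x = 1 + 1\n")

# compile() returns the path of the cached bytecode file it wrote.
pyc = py_compile.compile(src)
```

This is exactly the caching step the post says PHP can't do out of the box.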
|
# ? Jun 3, 2013 21:43 |
|
|
thats so that you can pay $50 a month for Zend Optimizer+ which spits out .phpc files
|
# ? Jun 3, 2013 21:56 |