|
admittedly there are cases when it doesnt matter so gently caress it like who cares if you read the xml wrong and a dude gets the wrong kind of porn or w/e, for example
|
# ? Nov 14, 2014 18:54 |
|
the janitor is a garbage collector and i dont care how or when he takes out the trash as long as he does it regularly enough that the place isnt overflowing with the stuff
|
# ? Nov 14, 2014 18:56 |
|
Arcsech posted:
admittedly there are cases when it doesnt matter so gently caress it

yeah i can see it not mattering if the software you work on doesnt matter. good luck pulling that w/ medical devices though, lmfao
|
# ? Nov 14, 2014 18:57 |
|
hackbunny posted:
what's inside a clang member function pointer? how big are they, what are the hidden fields? is it documented anywhere, like does it come from that common ABI shared by clang and gcc?

yes, we follow the common abi. note that arm tweaked this abi for their platforms: the is-virtual bit is the low bit of the this-adjustment, because arm uses the low bit of a function pointer to encode arm vs. thumb

hackbunny posted:
rjmccall, does this mean Windows-compatible exceptions are coming to llvm like ever?

we're actually thinking through how to implement this right now. see reid kleckner's recent posts to llvmdev

hackbunny posted:
I think I promised I'd look into it myself like a year ago, but then I looked at the clang sources and it seemed incredibly complicated and way beyond my skills. my understanding is that Windows-compatible exceptions will require changes even to llvm, because llvm assumes Unwind ABI semantics

it shouldn't require completely different ir. it will require completely different backend behavior because we'll need to outline functions; i told reid he needed to take this to llvmdev before we could consider it in the frontend. seh filter functions (where you can use an arbitrary expression in local context to decide whether you're handling an seh exception) might require a tweak to the representation

hackbunny posted:
should I bother trying to make sense of it some day, or do I just sit back and let the big boys go at it?

you'll need to learn a lot of stuff about llvm internals if you want to help with this one, but you're welcome to
|
# ? Nov 14, 2014 19:12 |
|
suffix posted:
https://pachyderm-io.github.io/pfs/

Here's some more review of where it technologically is right now and maybe where it's heading. Right now it's nowhere.

It's a distributed file system, right? It uses raft somehow, right? There's an API for adding files and making "commits". The way it is, there's no transactions, no notion of multiple clients. You can only have one client at a time, because otherwise, one client's "commit" will include some of the changes created by another client. So you have a distributed system whose real feature of making a commit can only be used by one client at a time. (Which machine gets to be that client and how do you fail over when that machine dies?) Multiple machines could still upload files or something. Or they could make commits willy-nilly and know "my file exists as of commit C" as long as it knows nobody else is going around deleting files they haven't heard of.

It uses etcd (which uses raft) but it only does so for managing the question of which machines are the masters and slaves ("replicas") for different shards. The code figures out which machine the master is and sends the read/write there. The master then figures out what the replica's hostname is (but there's only one replica right now) and sends the read/write there, too. In the case of a netsplit, one side would just send its writes to what the old master was, also sitting on the same side of the netsplit. The other side could change the master and have writes on its side. Eventually one side will lose its writes.

In other words, the "log" being replicated by etcd's raft implementation, as used by pfs, is the log of who the masters and replicas are, not the log of what writes are happening. Each node will have a clear picture of the history of what the masters for each shard were, but will still have thrown writes away at the wrong masters because they received that information at different times.

I suspect the second problem will persist. The first might last a while, but it's too obvious to last forever, if development continues. Right now the project is still in the "hard coded IP addresses, extraneous print statements" stage. You can't change how many shards there are or even, as far as I looked, change what the set of masters and replicas are.
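The one-client-at-a-time problem described above can be sketched with a toy versioned store (hypothetical names, not the actual pfs API): with one shared working tree and no transaction isolation, one client's "commit" snapshots another client's half-finished writes.

```python
# Toy sketch (assumed names, not the pfs API): a versioned store with
# a single shared working tree and no transaction isolation.
class NaiveVersionedStore:
    def __init__(self):
        self.working = {}   # shared mutable working tree
        self.commits = []   # list of snapshots

    def write(self, path, data):
        self.working[path] = data

    def commit(self):
        # Snapshot whatever is in the tree right now -- including
        # writes from other clients that aren't "done" yet.
        self.commits.append(dict(self.working))
        return len(self.commits) - 1

store = NaiveVersionedStore()
store.write("a/part1", "client A's data")
store.write("b/part1", "client B's data")  # B writes before A commits
c = store.commit()  # A's commit captures B's in-progress write too
```

With real isolation, client A's commit would contain only A's writes; here it unavoidably contains B's as well.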
|
# ? Nov 14, 2014 19:24 |
|
we use btrfs in production, but we have something of a privileged position there
|
# ? Nov 14, 2014 19:51 |
|
sarehu posted:
It uses etcd (which uses raft) but it only does so for managing the question of which machines are the masters and slaves ("replicas") for different shards. The code figures out what the master is and sends the read/write there. The master then figures out what the replica's hostname is (but there's only one replica right now) and sends the read/write there, too. In the case of a netsplit, one side would just send its writes to what the old master was, also sitting on the same side of the netsplit. The other side could change the master and have writes on its side. Eventually one side will lose its writes. In other words, the "log" being replicated by etcd's raft implementation, as used by pfs, is the log of who the masters and replicas are, not the log of what writes are happening. Each node will have a clear picture of the history of what the masters for each shard were, but still will have thrown writes away at the wrong masters because they received that information at different times.

So uh, that's using a quorum system, but using one node for leader and one node as a follower? That thing then can only work without any single node failing, otherwise it fails to reach a majority. That's uh, more or less gaining nothing from using that algorithm, because a single master/slave setup would yield the same properties, but without having to figure out who leads and who doesn't?

It would also mean (for the future) that if one of the two nodes goes away forever, you do not have the possibility to ever create the quorum required to redefine what the quorum value is or modify the set of active nodes. Meaning short of taking the system entirely down and changing values by hand, you bricked your 2-node cluster. That is, you're giving up a lot of fault tolerance. If there's not a third node entering the picture soon, there's pretty much nothing gained.
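The majority arithmetic behind that complaint, as a quick sketch:

```python
# A majority quorum over n nodes needs floor(n/2) + 1 votes.
def majority(n):
    return n // 2 + 1

# With 2 nodes the majority is 2, so losing either node halts the
# cluster -- no better than a plain master/slave pair.
assert majority(2) == 2

# With 3 nodes the majority is still 2, so one node can die and the
# remaining two can still elect a leader and reconfigure membership.
assert majority(3) == 2
assert majority(5) == 3
```

That's why quorum systems only start paying for themselves at three nodes and up.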
|
# ? Nov 14, 2014 20:05 |
|
MononcQc posted:
Fun example to try with your favorite XML parser.

java.net.MalformedURLException: no protocol: <vxml xmlns="http://www.w3.org/2001/vxml"xmlns:xsi="http://www.w3.org/2001/XMLSchema-
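For contrast, a conforming parser refuses that document outright instead of misreading it. A sketch with Python's stdlib parser (the missing space between the two attributes makes the document not well-formed; the full namespace URI is filled in here for the example):

```python
import xml.etree.ElementTree as ET

# No whitespace between the two xmlns attributes: not well-formed XML,
# so a conforming parser must reject the entire document.
doc = ('<vxml xmlns="http://www.w3.org/2001/vxml"'
       'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>')
try:
    ET.fromstring(doc)
    parsed = True
except ET.ParseError:
    parsed = False  # expat raises "not well-formed (invalid token)"
```

Rejecting up front beats mangling the two attributes into one and carrying on.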
|
# ? Nov 14, 2014 20:11 |
|
MononcQc posted:
So uh, that's using a quorum system, but using one node for leader and one node as a follower? That thing then can only work without any single node failing, otherwise it fails to reach a majority. That's uh, more or less gaining nothing from using that algorithm, because a single master/slave setup would yield the same properties, but without having to figure out who leads and who doesn't?

If there are three shards (i.e. "buckets"), each with a master and replica, and let's say they're on six different machines, then you'd have 6 machines voting, because the assignment of the whole cluster is defined by a single etcd thingy. I think there being only one master and one replica per shard is a temporary thing because the dude didn't feel like writing a for loop.
|
# ? Nov 14, 2014 20:15 |
|
Subjunctive posted:
we use btrfs in production, but we have something of a privileged position there

Yeah, you use it in a read-only mode where if something gets corrupted you re-flash and get on with your day. Because btrfs is so broken that simply trying to write a file will cause your system to keel over.
|
# ? Nov 14, 2014 20:17 |
|
sarehu posted:
If there are three shards (i.e. "buckets"), each with a master and replica, and let's say they're on six different machines, then you'd have 6 machines voting, because the assignments of the whole cluster is defined by a single etcd thingy.

That is far less scary, then, yeah.
|
# ? Nov 14, 2014 20:41 |
|
MononcQc posted:
It's loving dumb that when people fix their idiot protocols, they just keep piling on bullshit rather than properly versioning them up

otoh, oauth 2
|
# ? Nov 14, 2014 20:48 |
|
Suspicious Dish posted:
Yeah, you use it in a read-only mode where if something gets corrupted you re-flash and get on with your day.

that's not actually the case, but we also employ the lead developers, which helps
|
# ? Nov 14, 2014 20:50 |
|
Notorious b.s.d. posted:
1.0 was published in '98

Why deal with revisions when you already have versioning?
|
# ? Nov 14, 2014 20:52 |
|
KARMA! posted:

we have a windowing system, let's call it 'W', that's kinda cute
'W' needs some revisions, what's the successor? 'X' of course
X needs some revisions, what's the successor? X2 of course
X11 needs some revisions, what's the successor? X11R2 of course
X11R6 needs some revisions, what's the successor? X11R6.1 of course
|
# ? Nov 14, 2014 21:03 |
|
KARMA! posted:

semantic versioning

just like software, you have to make bugfixes to otherwise-stable standards. if it makes you happier, think of XML 1.0, 5th edition as XML version 1.0.5
|
# ? Nov 14, 2014 21:13 |
|
Blotto Skorzany posted:
we have a windowing system, let's call it 'W', that's kinda cute

W and X were incompatible
X10 and X11 were largely source compatible but not entirely
X11R1 through X11R7 should be entirely compatible
|
# ? Nov 14, 2014 21:13 |
|
Notorious b.s.d. posted:
semantic versioning

Except the only readable version for all the documents you get is 1.1 or 1.0, so who the gently caress cares what the revision is; you're both right and wrong no matter which one you picked, in a way.
|
# ? Nov 14, 2014 21:17 |
|
Subjunctive posted:
that's not actually the case, but we also employ the lead developers, which helps

This is what I was told by some Facebook engineer. Is there any more documentation about how you're using btrfs?
|
# ? Nov 14, 2014 22:05 |
|
Suspicious Dish posted:
This is what I was told by some Facebook engineer. Is there any more documentation about how you're using btrfs?

not public that I know of.
|
# ? Nov 14, 2014 22:16 |
|
MononcQc posted:
Except the only readable version for all the documents you get is 1.1 or 1.0 so who the gently caress cares what the revision is, you're both right and wrong no matter which one you picked, in a way.

seems to me like the complete version string is in the document title: Extensible Markup Language (XML) 1.0 (Fifth Edition)
|
# ? Nov 14, 2014 22:38 |
|
Notorious b.s.d. posted:
idgi

I mean poo poo like me having to parse an XML file. The version string goes something like <?xml version="1.0" encoding="UTF-8"?>. It's now up to you to hope to figure out which revision that one's about.
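What the parser actually gets to see, as a sketch: the declaration carries a version and an encoding, and nothing that identifies the edition.

```python
import re

# The XML declaration exposes a version and an encoding. Which edition
# of the 1.0 spec the document was written against is simply not
# representable here.
decl = '<?xml version="1.0" encoding="UTF-8"?>'
m = re.match(r'<\?xml\s+version="([^"]+)"(?:\s+encoding="([^"]+)")?', decl)
version, encoding = m.group(1), m.group(2)
```

So from the wire, "1.0" is all you ever learn, whichever of the five editions the author had in mind.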
|
# ? Nov 14, 2014 23:10 |
|
MononcQc posted:I mean poo poo like me having to parse a XML file. The version string goes something like <?xml version="1.0" encoding="UTF-8"?>. It's now up to you to hope to figure out which revision that one's about. well, 1.0.5 shouldn't have any changes that would break your xml file marked 1.0. that is the point of having minor versions in the first place.
|
# ? Nov 14, 2014 23:16 |
|
Notorious b.s.d. posted:
Extensible Markup Language (XML) 1.0 (Fifth Edition)

Java 2 Standard Edition 5.0 Update 15
|
# ? Nov 14, 2014 23:17 |
|
Notorious b.s.d. posted:
well, 1.0.5 shouldn't have any changes that would break your xml file marked 1.0. that is the point of having minor versions in the first place.

Right, but the other thing is that it means that a 1.0 parser may not accept a 1.0 document, even if both respect a 1.0 spec. It's usually really minor stuff that you see when you write an up-to-spec implementation. I've felt the pain of doing that with HTTP, and no doubt HTTP is worse there in any case, specifically given how long poo poo can be left to rot in the wild.
|
# ? Nov 14, 2014 23:33 |
|
Suspicious Dish posted:
This is what I was told by some Facebook engineer. Is there any more documentation about how you're using btrfs?

I confirmed that we have RW deployments in production, yeah.
|
# ? Nov 15, 2014 01:10 |
|
Btrfs tends to flip out if you run RethinkDB on it.
|
# ? Nov 15, 2014 01:40 |
|
wait theres filesystems that people "release" that don't include actually reading and writing data?
|
# ? Nov 15, 2014 02:40 |
|
There are lots of readonly filesystems, yes. ISO-9660 for example, or squashfs.
|
# ? Nov 15, 2014 02:44 |
|
Tiny Bug Child posted:
i don't think i'll ever understand the attitude a lot of computer people have that you should just throw your hands up and break when there's a tiny flaw in the input

stated like someone who has never written software used by anyone but themself
|
# ? Nov 15, 2014 03:33 |
|
Tiny Bug Child posted:
i don't think i'll ever understand the attitude a lot of computer people have that you should just throw your hands up and break when there's a tiny flaw in the input

This kind of attitude often leads to security problems.
|
# ? Nov 15, 2014 04:04 |
|
sarehu posted:This kind of attitude often leads to security problems.
|
# ? Nov 15, 2014 04:48 |
|
MononcQc posted:
It's usually really minor stuff that you see when you write an up-to-spec implementation. I've felt the pain of doing that with HTTP and no doubt HTTP is worse there in any case, specifically given how long poo poo can be left to rot in the wild.

http 1.1 specs are also periodically revised
|
# ? Nov 15, 2014 04:54 |
|
Notorious b.s.d. posted:
http 1.1 specs are also periodically revised

I know. Off the top of my head, these are some I encountered:

The first version of http/1.1 allowed sending 100 Continue responses as a kind of heartbeat. Later revisions proscribed that use, but asked proxies to still support it. If you talk to a 1.0 client, you can allow the 100 continue to go through once if it sent the header, even though it wasn't in the original spec.

The first two revisions of http/1.1 were unclear on how to proceed when you encountered both an upgrade and an expect: 100-continue header. The latest revision prescribes a given order to handle them. This still means older versions could expect or rely on other orders.

The second edition of HTTP/1.1 allowed the usage of the 'identity' transfer encoding, but left that bit unspecified (the section was missing from the RFC). The spec was badly edited, and the later 1.1 had it properly removed again.

The first two editions of HTTP/1.1 allowed header line folding if the next line began with whitespace; this was removed in the newest revision.

The first edition of HTTP/1.1 didn't specify proxy behaviour regarding versions. The second edition made it mandatory for proxies to upgrade requests to the highest version supported (1.1).

--

There are still a shitload of corner cases which aren't even getting addressed in HTTP/2 and I could imagine future conflicts from that. These might result in further revisions, which I'm sure will be just great!

For example, a response to a HEAD request, a 204 or a 304 response all require you to omit the content-body but return the headers identically otherwise. Many implementations therefore do not calculate the content-length and return no headers related to that. However, the spec mentions that in such circumstances, the connection MUST be closed to delimit the response termination. This gives you the case where a proxy should assume an implicit 'connection: close' to be there and could therefore add it later, eliminating keep-alive and cache optimizations for which HEAD requests are often used.
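The header line folding mentioned above is easy to show. A sketch of the obsolete unfolding that pre-revision parsers had to do (a continuation line starts with whitespace and is joined onto the previous header's value):

```python
# Obsolete HTTP/1.1 header line folding ("obs-fold", removed in the
# latest revision): a line starting with SP/HTAB continues the
# previous header's value.
raw = (
    "HTTP/1.1 200 OK\r\n"
    "X-Long: first part\r\n"
    "  second part\r\n"
    "Content-Type: text/plain\r\n"
    "\r\n"
)

def parse_headers(raw):
    lines = raw.split("\r\n")[1:]  # drop the status line
    headers = {}
    last = None
    for line in lines:
        if not line:
            break  # blank line ends the header section
        if line[0] in " \t" and last is not None:
            # obs-fold: join the continuation onto the previous value
            headers[last] += " " + line.strip()
        else:
            name, _, value = line.partition(":")
            last = name.strip()
            headers[last] = value.strip()
    return headers

h = parse_headers(raw)
```

A parser built against the newest revision may instead reject or mangle that message, which is exactly the kind of revision skew being complained about.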
|
# ? Nov 15, 2014 05:27 |
|
I got to read this on a tumblr today:

quote:
The creator of this language, who I believe is Sam Hughes, deserves some mercy. Perhaps his brain shall be allowed to co-inhabit with a brick, rather than being completely replaced.
|
# ? Nov 17, 2014 00:25 |
|
Today I learned how to detect an end-of-stream condition with the RethinkDB Javascript driver (if you use .next to iterate the stream cursor):

if (((err.name === "RqlDriverError") && err.message === "No more rows in the cursor."))

That's how.
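What that forces client code into, sketched in Python (hypothetical names; the real driver is Javascript): end-of-stream is only detectable by string-matching the error message, so any wrapper has to hard-code it.

```python
# Hypothetical sketch of wrapping a driver that signals end-of-stream
# via an error carrying a magic message string.
END_MESSAGE = "No more rows in the cursor."

class DriverError(Exception):
    def __init__(self, name, message):
        super().__init__(message)
        self.name = name
        self.message = message

def rows(cursor_next):
    """Yield rows from cursor_next() until the magic error appears."""
    while True:
        try:
            yield cursor_next()
        except DriverError as e:
            if e.name == "RqlDriverError" and e.message == END_MESSAGE:
                return  # one upstream typo in that string and this breaks
            raise
```

A sentinel value or a dedicated exception type would make this robust; comparing prose is what you get instead.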
|
# ? Nov 18, 2014 17:35 |
|
have you rigged explosives to the desk of the guy who wrote that driver yet
|
# ? Nov 18, 2014 17:38 |
|
That's the Python way of doing iteration.
|
# ? Nov 18, 2014 17:46 |
|
sarehu posted:Javascript driver
|
# ? Nov 18, 2014 17:47 |
|
sarehu posted:
Today I learned how to detect an end-of-stream condition with the RethinkDB Javascript driver (if you use .next to iterate the stream cursor):

sure would suck if there were a typo in that error message
|
# ? Nov 18, 2014 18:21 |