|
The best part is JSON doesn't specify what happens when you have repeated keys.
|
# ? Nov 8, 2015 12:10 |
|
I think the biggest problem with both formats is having to have the "no, don't create XML or JSON using string concatenation, are you loving retarded" conversation every half year or so.
|
# ? Nov 8, 2015 15:25 |
|
Obviously the solution is JSONPlus. Obviously the solution is BSON. Obviously the solution is JSON-Extended. Obviously the solution is JSONX. Obviously the solution is... uh... YAML?
|
# ? Nov 8, 2015 15:56 |
|
b0lt posted:The best part is that json isn't even compatible with the javascript specification. Hmm? The JSON encoding and decoding facilities in the JS standard match the RFC exactly AFAIK. You can't eval JSON reliably, but you shouldn't have been doing that anyway.
|
# ? Nov 8, 2015 16:30 |
|
Suspicious Dish posted:Huh. I never knew about xml:space. But I'm still generally confused by XML's namespaces.

The missing link in your understanding is that XML and namespaces are separate concepts, of which the latter maps onto the former. The XML specification does not have the concept of namespaces; names with colons in them are perfectly fine and have no special meaning. The XML specification itself defines two attributes with special meaning: xml:space and xml:lang. They have colons, but that is just part of the name, not some special magic. Might as well be "qwerty".

The "Namespaces in XML" specification gives special meaning to names with colons, with names starting with xmlns: having even more special meaning, building up to the namespaces concept you know.

In the end, I think the hate against XML is not warranted - sure, the concepts involved can be complicated, but I do not think they can be expressed in a significantly improved way without losing capabilities. The world is greatly improved by the existence of XML, it having saved us from a thousand homegrown and half-assed data formats. The price we must pay for this is lovely implementations made by developers in a rush or with a lack of understanding, and that is sad, but it is also inevitable when dealing with any nontrivial system - do not kid yourself and think this can be avoided by anything but limiting functionality and flexibility to such a small subset that mistakes cost little.
|
# ? Nov 8, 2015 17:17 |
|
Internet Janitor posted:Obviously the solution is JSONPlus. Just use TOML!
|
# ? Nov 8, 2015 17:30 |
|
Vanadium posted:Just use TOML! Is this what I think it is?
|
# ? Nov 8, 2015 18:01 |
|
Oh boy, it is.
|
# ? Nov 8, 2015 18:02 |
|
fleshweasel posted:I don't see what is inadequate about the types of data JSON supports. It seems more reasonable than XML on that count, actually. You never had to put a date in JSON apparently.
|
# ? Nov 8, 2015 19:51 |
Ender.uNF posted:You never had to put a date in JSON apparently. When I've had to, I specified it as a Unix timestamp in UTC. Anyone who fucks that up deserves to fail.
|
|
# ? Nov 8, 2015 19:55 |
|
Then you hosed up because UTC uses leap seconds, whereas Unix timestamps do not. Attempting to parse your "UTC Unix" times with standard Unix timestamp methods will introduce one-second errors. Not to mention that most time libraries will refuse to process dates earlier than 1970 under this scheme. QED EssOEss fucked around with this message at 20:26 on Nov 8, 2015 |
# ? Nov 8, 2015 20:24 |
|
If a one-second error is that much of a problem you should be using TAI.
|
# ? Nov 8, 2015 20:26 |
|
Are we going to have timestamp conversion talk again oh boy.
|
# ? Nov 8, 2015 21:44 |
|
Athas posted:You can just encode everything as a string element, sure, but the primitives are annoying due to inherited Javascript brain damage, most notably by treating all numbers as double-precision floating point. which is actually disastrous, because now your non-double-precision floating point number gets thrashed as it's propagated through multiple systems that don't interpret JSON numbers the same way. so it's worse than you think it is
|
# ? Nov 8, 2015 22:18 |
|
ironically the safest thing you can do with a number that you don't want interpreted as double floating point or as an Int32 is to store it as a string in JSON
|
# ? Nov 8, 2015 22:24 |
|
Pavlov posted:Are we going to have timestamp conversion talk again oh boy. ISO8601 4 lyfe, clearly!
|
# ? Nov 8, 2015 22:28 |
|
Sagacity posted:ISO8601 4 lyfe, clearly! Actually, it should be "ISO8601 lyfe 4"
|
# ? Nov 8, 2015 22:32 |
|
Subjunctive posted:Hmm? The JSON encoding and decoding facilities in the JS standard match the RFC exactly AFAIK. Strings in JSON can contain literal U+2028 and U+2029, they have to be escaped as \u2028 and \u2029 in JavaScript.
|
# ? Nov 8, 2015 22:37 |
|
pseudorandom name posted:Strings in JSON can contain literal U+2028 and U+2029, they have to be escaped as \u2028 and \u2029 in JavaScript. Yes, I understand that. That's why you can't eval JSON reliably. But the JS specification is compatible with JSON because its JSON facilities (since ES5) match the RFC semantics rather than the JS string ones.
|
# ? Nov 8, 2015 23:02 |
|
I don't think b0lt was talking about JSON.parse()
|
# ? Nov 8, 2015 23:04 |
|
http://json-schema.org/latest/json-schema-validation.html#anchor36

why would you ever choose to overload arrays and tuples why
|
# ? Nov 9, 2015 01:53 |
|
A couple years back, WordPress (who is super insistent about having a capital P in the middle) pushed an update with a function called capital_P_dangit, which automatically changes instances of Wordpress (lowercase p) to WordPress (capital P). This broke several people's URLs and was massively unpopular, and yet took 5 months for them to revert the simple change. How many other massively unpopular features are being kept because of a personal crusade on a lead developer's part?
|
# ? Nov 9, 2015 04:01 |
|
quote:The original WoW developers decided that there would be an array to hold your inventory. The first several entries are things that end up on the paper doll, your head and leg slots and such. After that comes your inventory. At some point they wanted to add a bank to the game, so they added that to the end of the array. Players shouldn't be able to access their bank anywhere in the world, as it would break the code. This was handled by adding lots of statements in different places in the code, defining what the array position was where the inventory ended and where the bank begins. This value was hardcoded all over the place, but it isn't just a simple search to find them all. Some math logic may rely on it being constant.

From Blizzcon's engineering panel. Game dev is special.
|
# ? Nov 9, 2015 04:06 |
|
Doesn't the majority of their quest system run on killing invisible bunnies too?
|
# ? Nov 9, 2015 04:11 |
|
https://github.com/Dirktheman/rsa-codeigniter-library/blob/master/application/libaries/Rsa.php#L33-L87

PHP code:
|
# ? Nov 9, 2015 07:07 |
|
no loving way that's what, like, a 9-bit key?
|
# ? Nov 9, 2015 07:29 |
|
BigRedDot posted:Fear not! IBM (as only they in particular possibly could) has created something to give us the best of all possible worlds! Isn't this because newer POWER has some kind of XML hardware accelerator thing?
|
# ? Nov 9, 2015 08:38 |
|
evensevenone posted:Isn't this because newer POWER has some kind of XML hardware accelerator thing? Holy poo poo, that's a real thing. I had no idea
|
# ? Nov 9, 2015 15:30 |
|
I just caught my C compiler ignoring fall-through cases. The kind that'd work in C#: nothing in the case, just pure fall-through to the next one. Again, recognizing that I'm the problem, my errors were: 1) using fall-through cases at all, 2) using function pointers to hide the usage and enabling this aggressive optimization
|
# ? Nov 9, 2015 21:31 |
|
JawnV6 posted:I just caught my C compiler ignoring fall-through cases. The kind that'd work in C#, nothing in the case just pure fall-through to the next one. Again, recognizing that I'm the problem, was my error: I'm not sure how any code with well-defined behavior could lead to the compiler optimizing to ignore fall-through cases (assuming you mean that it basically silently inserted break statements at the end of each case). If this were a thing which has even just a very low probability of happening, you'd see loads and loads of broken code, from drivers to databases. I'm also really curious what you mean by 2); function pointers aren't some obscure, dusty C corner, and they're not really tricky for compilers to handle.
|
# ? Nov 9, 2015 21:50 |
|
Pure fallthrough cases are a 100% reasonable idiom. Also, just in case you're not joking, there are no extra undefined or implementation-defined behavior rules around switches; the behavior of fallthrough, missing cases, etc. are all completely defined. And in fact are defined so straightforwardly that I can't believe that somebody could write a C compiler that messes them up.
|
# ? Nov 9, 2015 21:57 |
|
In fact, the fallthrough case is easier than the break case. It just amounts to "don't write a jump instruction here".
|
# ? Nov 9, 2015 22:03 |
|
rjmccall posted:Also, just in case you're not joking, there are no extra undefined or implementation-defined behavior rules around switches; the behavior of fallthrough, missing cases, etc. are all completely defined. And in fact are defined so straightforwardly that I can't believe that somebody could write a C compiler that messes them up. YeOldeButchere posted:I'm also really curious what you mean by 2); function pointers aren't some obscure, dusty C corner, and they're not really tricky for compilers to handle. It's a little convoluted, but the functions are declared static in one file, pointers passed out through a struct, and only ever called through that struct. I can imagine if everything was right there in a simple loop without that indirection (i.e. "I am calling this switch with cases 4, 27, 18, and 19") it would optimize differently than my current abomination.
|
# ? Nov 9, 2015 22:14 |
|
JawnV6 posted:It's a little convoluted, but the functions are declared static in one file, pointers passed out through a struct, and only ever called through that struct. I can imagine if everything was right there in a simple loop without that indirection (i.e. "I am calling this switch with cases 4, 27, 18, and 19") it would optimize differently than my current abomination. But what's in the switch blocks? Calls through that struct of function pointers? Calling through a vtable in a switch statement is not exactly an edge case.
|
# ? Nov 9, 2015 22:27 |
|
JawnV6 posted:It's a little convoluted, but the functions are declared static in one file, pointers passed out through a struct, and only ever called through that struct. I can imagine if everything was right there in a simple loop without that indirection (i.e. "I am calling this switch with cases 4, 27, 18, and 19") it would optimize differently than my current abomination. There's really no difference between calling a function directly and calling it through a function pointer, so whatever optimizations are being done should work (or fail) for both cases. Calling through a function pointer uses the exact same ABI as a direct call, all registers and the stack are set in the exact same way, except that the target for the call is stored in a register or known memory location instead of being (most likely) hardcoded with the call instruction. This makes sense as the callee doesn't know if it's being called directly or through a pointer, so the convention has to be the same either way. If your compiler is loving that up, you have bigger things to worry about than whether you should use fallthrough cases or not. Heh, I just remembered the last time I spent something like 15 minutes, including calling one of my coworkers over, because a loop was acting really strange. We were to the point where we were looking at the assembly being emitted and scratching our heads because it made no sense, before finally noticing the semicolon after the for(). Whoops. Deep Dish Fuckfest fucked around with this message at 22:32 on Nov 9, 2015 |
# ? Nov 9, 2015 22:29 |
|
Do modern platforms inline call targets? I thought they mostly went through an indirection because of dynamic loading and relocatable libraries.
|
# ? Nov 9, 2015 22:32 |
|
Subjunctive posted:Do modern platforms inline call targets? I thought they mostly went through an indirection because of dynamic loading and relocatable libraries. If you're calling out to a shared library, no, but within the same object file or with LTO turned on, definitely.
|
# ? Nov 9, 2015 22:36 |
|
It's not really my field of expertise, I'll admit, but I always thought that part of the runtime linker's job when loading an image was to fix up all of the calls that weren't made with position-independent code so that they'd point to the real locations in a process' address space. That does raise some issues with sharing library objects between different processes, though, because you wouldn't be able to use anything that isn't position-independent since you can't fix the call targets in shared code. So you might be right. I should probably look this up. e: /\/\/\ yeah, that makes sense to me.
|
# ? Nov 9, 2015 22:44 |
|
YeOldeButchere posted:It's not really my field of expertise, I'll admit, but I always thought that part of the runtime linker's job when loading an image was to fix up all of the calls that weren't made with position-independent code so that they'd point to the real locations in a process' address space. Generally dynamic linkers copy-on-write the global offset table or equivalent and update it as dependent symbols are located. I guess for intra-file calls you can inline a relative-to-$ip call target, I was thinking absolute addresses. E: more: https://www.technovelty.org/linux/plt-and-got-the-key-to-code-sharing-and-dynamic-libraries.html
|
# ? Nov 9, 2015 22:57 |
|
The way it's usually done is the compiler emits an instruction that assumes a direct static offset (up to the limits of the immediate field on that instruction, which is usually very wide), and the linker inserts a stub if it's not possible to satisfy that constraint. For functions defined outside of the linkage unit, that stub usually resolves the address lazily.
|
# ? Nov 9, 2015 23:00 |