|
Vanadium posted:I am trying to decide whether perl or php's behaviour annoys me more than having to explicitly convert integers when I want to multiply them with doubles in haskell, help me out here. Haskell's number types are always the worst design decision. Always.
|
# ¿ Sep 26, 2010 02:56 |
|
|
shrughes posted:Lies. Haskell's numeric library is actually quite reasonable. If you recognize that designing a numeric library for extensibility is insane anyway. Yeah, it's OK as long as you don't want to create your own number types, because then you don't have to care about the interrelationships between the Num, Fractional, Integral, Real, RealFrac, RealFloat, and Floating classes, or the fact that many of their members were scattered among them almost at random. It's just kind of sad that the Haskell designers went through all the work of creating an intricate class hierarchy, and it's not even easy to build what should be simple extensions to the numeric types, like vectors, matrices, or numbers that carry units. Personally, I'm hoping something like numeric-prelude makes it into some Haskell Prime.
|
# ¿ Sep 26, 2010 06:40 |
|
If you really want to use a monad transformer or something, it doesn't look much worse. It just behaves a little strangely, because you're no longer automatically strict in modifications, and you no longer have a guarantee of mutable state, only a strong reason to believe that the compiler will choose that particular optimization; you can't guarantee mutable structures outside of ST or IO, because that's what they're for.
|
# ¿ Oct 2, 2010 22:48 |
|
Janin posted:This isn't an example of mutability; Control.Monad.State is actually just a wrapper around (\state -> (result, newState)) -style functions. IO and ST are the only types which support true mutability. Exactly. The compiler may be able to optimize such accesses into mutation internally, but there's no guarantee it'll do so.
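Janin's point can be sketched in a few lines of Python (the helper names `tick` and `bind` are my own, purely for illustration): a "stateful" computation is just a function from a state to a (result, new state) pair, and sequencing two of them is ordinary function plumbing with no mutation anywhere.

```python
# A State "action" is just a function: state -> (result, new_state).
def tick(n):
    return (n, n + 1)   # result is the old state; new state is incremented

def bind(m, f):
    # Sequence two actions by threading the state through explicitly.
    def run(s):
        a, s1 = m(s)
        return f(a)(s1)
    return run

# Run tick twice, collecting both results; the "state" is passed by hand.
two_ticks = bind(tick, lambda a: bind(tick, lambda b: lambda s: ((a, b), s)))
assert two_ticks(0) == ((0, 1), 2)   # two results, final state 2
```

Whether a compiler turns that threading into in-place mutation is an optimization detail, which is exactly the point above.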
|
# ¿ Oct 2, 2010 22:57 |
|
KARMA! posted:Mind telling us why? A simplified answer: For most hash constructions, you can amortize the costs of a changing suffix more readily than a changing prefix, meaning that you can take a dictionary of unsalted hashes for common passwords and cheaply transform it to a dictionary of suffix-salted hashes for common passwords, allowing more effective attacks against databases of accounts. Please note that this is a broad discussion of one particular class of attack, and there are a whole lot of caveats attached to this description which I'm not going to go into here. The answer you should keep in mind is more general: Cryptographic protocols are really incredibly hard to design well, and they tend to be extremely subtle and fragile, often for reasons that don't appear to make sense unless you have had extensive specialist education in the field. If you don't know exactly why every single decision was made in an existing protocol, you are not qualified to change the protocol. Please don't invent your own cryptographic protocols.
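The amortization argument can be made concrete with an incremental hash API like Python's hashlib (the names `midstates` and `suffix_salted` are mine, and this is a toy sketch of one attack class, not a complete attack): for a suffix salt, H(password || salt), an attacker caches the hash midstate for each common password once, then finishes it cheaply for every salt in the stolen database.

```python
import hashlib

common_passwords = [b"password", b"123456", b"letmein"]

# Expensive part, done once per dictionary entry: absorb the password.
midstates = {pw: hashlib.sha256(pw) for pw in common_passwords}

def suffix_salted(pw, salt):
    # Cheap per-account work: clone the cached midstate, absorb only the salt.
    h = midstates[pw].copy()
    h.update(salt)
    return h.hexdigest()

# Same digest as hashing password||salt from scratch, for a fraction of the work.
assert suffix_salted(b"letmein", b"xyz") == hashlib.sha256(b"letmeinxyz").hexdigest()
```

With a changing prefix (salt first), no per-password midstate can be reused across salts, which is one reason the conventional constructions put the salt where they do.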
|
# ¿ Dec 18, 2010 02:34 |
|
Jabor posted:It looks to me like the username is taking the place of a nonconstant prefix. Oh, probably, but it's a goofy construction. Usernames are lower-entropy and often longer than the random nonces you're supposed to use as salts, which will have who-knows-what effect on attackers trying to minimize costs, because now they can potentially exploit the entire first block of the hash being a combination of low-entropy terms. Not to mention the possibility of collisions, where user foo has password barqux and user foobar has password qux. All of which means the potential attack surface is larger in some really-hard-to-quantify manner. There's just no good reason to use a non-standard protocol that seems like it might have the same security properties as the real thing but hasn't been studied, when you can just not write your own cryptographic protocols. Seriously, cryptographic protocols are really hard, for lots of small, subtle, incredibly obscure reasons. It's very hard to get the education you need to do a halfway-adequate job of reasoning about possible attacks, the education expires extremely quickly, and even the experts get it wrong a large proportion of the time. It can be fun to see if you can find all the problems in some homemade protocol, but even if you and your best friend and 5000 people on the internet think you found them all, you're simply wrong.
|
# ¿ Dec 18, 2010 03:51 |
|
Mustach posted:Even worse, do they think that return is a function? Clearly return is the continuation implicitly passed to the current function, and if C supported higher-order programming you'd be able to pass it as a parameter to other functions for continuation-passing-style programming. People who use parentheses on return are just planning ahead to be future-compatible with FunctionalC.
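The joke has a real idea behind it. In continuation-passing style, the "return" slot really is an explicit function argument you can pass around; a minimal Python sketch (function names are invented for illustration):

```python
# Continuation-passing style: "returning" means invoking the continuation.
def add_cps(x, y, ret):
    ret(x + y)

def mul_cps(x, y, ret):
    ret(x * y)

results = []
# Compute (2 + 3) * 4 by chaining continuations instead of returning values.
add_cps(2, 3, lambda s: mul_cps(s, 4, results.append))
assert results == [20]
```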
|
# ¿ Jan 23, 2011 23:20 |
|
Kelson posted:That's what I thought at first, except key_length is presumably 0 for the empty string. That means key_end = key + 0, which is really just key. Which skips the for(sptr = key;sptr < key;sptr++) loop? Correct, except the memory block isn't invalid in that case, it's explicitly initialized to zero by the bzero call prior to the block. The value Aleksei Vasiliev chose doesn't come from the code, it's arbitrary. It just happens to have the property that when XORed over itself in 16-byte chunks it becomes zero. The key "4re35na2aTaVasAy4re35na2aTaVasAy" is the same as the key "" is the same as the key "0123456789abcdef0123456789abcdef" is the same as the key "abcdefghijklmnopabcdefghijklmnop" is the same as the key "aaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbb" and so on and so on. ShoulderDaemon fucked around with this message at 04:35 on Feb 24, 2011 |
# ¿ Feb 24, 2011 04:32 |
|
Kelson posted:If key_length is 0, the for loop never executes (no xor takes place in my_aes_create_key) which leaves rkey bzero'd. Kelson posted:If mysql was retarded and key_length somehow weren't 0 in the empty-key instance above, it'd xor *sptr into rkey, which is "invalid" memory (sptr = key; sptr < key+key_length) not the bzero'd buffer (ptr = rkey; bzero(rkey, AES_KEY_LENGTH/8)). Kelson posted:I believe "4re35na2aTaVasAy" is xor'd into each 16-byte chunk; where is that value in the code above? The only thing that's special about the key that Aleksei Vasiliev chose is that it consists of a 16-character string, repeated an even number of times. Any such key will be equivalent to the all-zeroes key or the empty key. Kelson posted:I may certainly be missing something, I'm just asking what. It appears to be in code not shown in the link (such as within rijndaelKeySetupEnc(aes_key->rk, rkey, AES_KEY_LENGTH)). No? Nowhere within any of the MySQL code or its support libraries will you find "4re35na2aTaVasAy". It's not a value that comes from the code. It's just a value that Aleksei Vasiliev picked to demonstrate the problem.
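The equivalence is easy to demonstrate in a few lines of Python (`fold_key` is a reconstruction from the description above, not MySQL's actual source): the key bytes are XORed cyclically into a zeroed 16-byte buffer, so any 16-byte chunk repeated an even number of times cancels itself out.

```python
def fold_key(key: bytes, rkey_len: int = 16) -> bytes:
    rkey = bytearray(rkey_len)        # the bzero'd buffer
    for i, b in enumerate(key):
        rkey[i % rkey_len] ^= b       # XOR cyclically over the buffer
    return bytes(rkey)

# A 16-byte chunk XORed over itself an even number of times vanishes:
assert fold_key(b"4re35na2aTaVasAy" * 2) == fold_key(b"")
assert fold_key(b"0123456789abcdef" * 2) == fold_key(b"")
assert fold_key(b"a" * 16 + b"b" * 16 + b"a" * 16 + b"b" * 16) == fold_key(b"")
```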
|
# ¿ Feb 24, 2011 17:49 |
|
Scaevolus posted:That's not how aes_encrypt works, though. It expects to get a (random) 128-bit string. It should raise an error if it's handed a key that's too long or too short, rather than use poor key compression. The problem is that by allowing long keys, it encourages misuse (such as inputting an English passphrase) and compounds that problem by using a key compression technique which may significantly reduce entropy in highly structured keys (such as, for example, an English passphrase). If you're going to do key compression, there's essentially no case where the algorithm MySQL uses is the right choice. The coding horror is that it tries, in a completely braindead and unsafe manner, to correct for a programmer error instead of just failing.
|
# ¿ Feb 24, 2011 21:28 |
|
OCaml will be easier to learn than Haskell if you are coming from a strict language background. You should still learn both, though; Haskell is fun.
|
# ¿ Mar 4, 2011 21:09 |
|
rjmccall posted:They're still well short of Turing complete. You can embed arbitrary Perl expressions in a Perl regexp.
|
# ¿ Apr 14, 2011 02:23 |
|
yaoi prophet posted:What's that from? It's pretty obviously Haskell, I'm guessing it's some kind of funky metaprogramming stuff? Looks like part of some Template Haskell for some option-parsing library; it's producing a Haskell expression at compile-time by direct manipulation of syntax trees. Template Haskell is pretty ugly to begin with, and the indentation is not helping at all there.
|
# ¿ Dec 12, 2011 10:07 |
|
Janin posted:yup Use AppE in `infix` position if you have to use it at all, and it tends to get a bit more readable. If you're doing this a lot, just make an operator for it. You usually don't need to do things like ListE (map (LitE . StringL) longs) when you can just write [|longs|]. I try to avoid LitE altogether. In general, TH looks a lot nicer if you use the quoters to do as much heavy lifting as possible. Remember that you can use Template Haskell in your Template Haskell.
|
# ¿ Dec 12, 2011 21:31 |
|
TRex EaterofCars posted:I will defer to those who know better but isn't the Maybe monad what's doing all the magic there? The Maybe type is indeed what is in play there, but the fact that Maybe happens to be a Monad isn't relevant. The instance of Functor for Maybe (a much simpler categorical structure) is all that's required for that code.
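The distinction can be sketched in Python, modeling Haskell's Maybe with None as the "Nothing" case (a loose analogy, not a faithful encoding): all the code in question needs is a map-if-present operation, which is the Functor part; no monadic sequencing is involved.

```python
# fmap for an optional value: apply f only when a value is present.
def fmap(f, maybe_x):
    return None if maybe_x is None else f(maybe_x)

assert fmap(lambda x: x + 1, 3) == 4       # Just 3  -> Just 4
assert fmap(lambda x: x + 1, None) is None # Nothing -> Nothing
```

Monadic bind would additionally let `f` itself decide to produce Nothing; fmap never needs that power.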
|
# ¿ Mar 21, 2012 03:50 |
|
Hughlander posted:Personally I think GPL v3 has pushed a lot of people into a more MIT/BSD/Apache style license. I used to release my code under GPL v2, but now I tend to use MIT instead. I don't quite get this. If GPLv2 does what you want, then why don't you release software under GPLv2?
|
# ¿ Mar 23, 2012 01:29 |
|
Internet Janitor posted:What I'm saying is, why are you so concerned about forcing other people to open-source their derivative works? What if a software engineer working for Oracle studied some code in Clang and then used the knowledge he learned to improve proprietary software without directly stealing a single line? Isn't that "profiting without giving anything back"? I would argue that's not a derivative work under US copyright law. Fair use covers studying a work, and works merely "inspired" by another work aren't derivative; if there's no direct copying, there's no derivation. (IANAL, This Is Not Legal Advice)
|
# ¿ Mar 23, 2012 02:37 |
|
Internet Janitor posted:best of all, placing their works in the public domain. Please never ever ever encourage someone to place code in the public domain. Please don't do it yourself. The WTFPL is enormously better than public domain declarations. Why? Because in a lot of countries, it is actually impossible for an author to release their own work into the public domain. Or it requires a specific registration, which may cost money. Or it requires auditable documentation of everyone who may have ever contributed to the work. Or it only applies to people who received the work by physical mail. Or numerous other crazy restrictions. And that means that in a lot of the world, you just can't use this supposedly "public domain" software. Any license is better than public domain declarations. Yes, I have run into problems from this on real-world projects, and I've had to harass some poor coder into agreeing to give me their software under the MIT license instead of "public domain", which wasted a few hours of their time and mine just so I wouldn't get attacked by a lawyer. I know it seems like "public domain" should be the least-restrictive thing you can do, but it's really not. Because of the specific legalities involved in that phrase, it's a huge tarpit that is better off avoided.
|
# ¿ Mar 23, 2012 04:22 |
|
Suspicious Dish posted:Verify against the reference implementation? PBKDF2 is described as agnostic to the underlying cryptographic primitives, as well as being parametric in iteration count and key length. This means that there may not be a "reference implementation" easily available for the particular combination of parameters chosen for your implementation. It is much harder for amateurs to verify correct implementations of PBKDF2 than it has any right to be.
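One concrete way to pin an implementation down is the RFC 6070 test vectors, which fix all three parameters at once (hash, iteration count, derived-key length). Python's hashlib exposes exactly those parameters:

```python
import hashlib

# RFC 6070, test case 1: PBKDF2-HMAC-SHA1, P="password", S="salt", c=1, dkLen=20.
dk = hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 1, dklen=20)
assert dk.hex() == "0c60c80f961f0e71f3a9b524af6012062fe037a6"

# Change any single parameter and the output is entirely different, which is
# why a vector for one (hash, iterations, length) combination says nothing
# about another.
dk2 = hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 1, dklen=20)
assert dk2 != dk
```

If your deployment uses a combination no published vector covers, you're reduced to cross-checking independent implementations against each other.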
|
# ¿ Apr 26, 2012 07:52 |
|
I think the XML-ish completely unhelpful response to that complaint is that in <foo>12345</foo>, the 12345 is obviously of type foo, and you have to know the context of the message if you want a more concrete interpretation.
|
# ¿ Apr 27, 2012 21:50 |
|
Suspicious Dish posted:Right, so the type data is out of band, which means that the logic that can be implemented by you by a generic XML parser becomes less and less, which means that "generic toolkits" invent their own hacks because people are lazy. Yeah, it's kind of shameful that so many supposedly "generic" intermediate layers are prone to simply throwing out information and relying on the most ambiguous representations they can possibly get away with. Horror: I have actually written modern code (in the last 2 years) for a new application that used Sun RPC to communicate. I was working in Haskell. Horror2: It was pretty much the best experience I've ever had writing IPC code, and I have little doubt that time will show it to be the most reliable IPC code I've written.
|
# ¿ Apr 27, 2012 22:08 |
|
Ensign Expendable posted:That is weird. Why not 255? Trailing dot + 3-character extension + NUL byte.
|
# ¿ Sep 6, 2012 02:41 |
|
Manslaughter posted:Not the same problem, but this reminded me of when I found a if (!(!(object))) once. The !!expr construct is a somewhat common pattern in a few languages; you can think of !! as the cast-to-bool operator, because it normalizes the following expression to be exactly true or false, which is sometimes desirable.
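Python doesn't have `!`, but it spells the same normalization as `not not` or `bool()`; a quick illustration:

```python
# In C-family languages, !!x collapses any truthy value to exactly true/false.
# Python's equivalents are `not not x` and `bool(x)`.
assert (not not "hello") is True
assert (not not 0) is False
assert bool([1, 2]) is True
assert bool(None) is False
```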
|
# ¿ Jan 15, 2013 22:30 |
|
qntm posted:Perl even has unless { ... } else { ... } although thankfully there is no elsunless. Yet. Well, there's unless ( ... ) { ... } elsif ( ... ) { ... } else { ... } which is arguably worse because now we're mixing positive and negated conditionals.
|
# ¿ Mar 9, 2013 02:06 |
|
yaoi prophet posted:You shouldn't operate + for strings because strings don't form a group under concatenation They totally can, though, if you introduce strings where every element can either be positive or negative. If a positive character is next to a negative character with the same character value, then the two cancel out and the string gets shorter. Such strings are a trivial generalization of "normal" (or "strictly positive") strings, and readily form a group under concatenation. They also let you unprint stuff, or untransmit things from the network. Obviously a useful feature, hopefully we'll see these in C++14.
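A toy Python sketch of those signed strings (the representation is invented for the joke, but the algebra is real; it's the free group on the alphabet): give each character a sign, cancel adjacent opposite pairs on concatenation, and every string acquires an inverse, which is exactly what the group structure requires.

```python
# A "signed string" is a list of (char, sign) pairs, sign in {+1, -1}.
def concat(a, b):
    out = list(a)
    for ch, sign in b:
        # A positive character cancels against the same negative character.
        if out and out[-1] == (ch, -sign):
            out.pop()
        else:
            out.append((ch, sign))
    return out

def inverse(s):
    # Reverse the string and flip every sign.
    return [(ch, -sign) for ch, sign in reversed(s)]

hello = [(c, +1) for c in "hello"]
assert concat(hello, inverse(hello)) == []   # "unprinting" hello
assert concat([], hello) == hello            # the empty string is the identity
```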
|
# ¿ May 7, 2013 20:24 |
|
I might argue that if a type has a well-known cast-to-bool operator with very "expected" semantics for that type, then it should not also have overloads for the short-circuiting operators, because when reading code that uses them I would expect it to act like a bool and might be surprised when my control flow isn't what I expected. That said, I don't see anything wrong with using operator overloads for && and || if your type doesn't pretend to be a bool at all. The confusion goes away completely. Also, now I want C++14 to give me something like "int &&&i" as a parameter which produces a thunk for lazy evaluation.
|
# ¿ May 7, 2013 21:40 |
|
Just use nickle instead, honestly.
|
# ¿ Aug 27, 2013 22:43 |
|
Here's some code I just wrote that I'm ashamed of. The idea is, I have a variadic template that I'm using in a few places. I need to write a function which takes as parameters a number of strings equal to the number of types I'm passing to this variadic template. As near as I can tell, expanding a parameter pack while ignoring its actual expansion is not trivial.
|
# ¿ Oct 10, 2013 01:57 |
|
Plorkyeran posted:Trivially better: Thanks, my code is now less ugly! Plorkyeran posted:set_dataType_names is a pretty horrifying name. The actual name in the code is newEventType, and it takes a few other parameters; I just wanted to make the usecase in the code sample somewhat more comprehensible.
|
# ¿ Oct 10, 2013 02:11 |
|
Tesseraction posted:Okay, but how does changing "x > y" to "x != y" not count as a fundamental change in the behaviour of a program? Over the set of values for which the program behaviour is defined, in this context those two operations are the same. For all other values, because the behaviour isn't defined anyway, it doesn't matter which operation you choose. There is no situation where the behaviour both is defined, and differs between the two operations, so there is no fundamental change. Hence, a legal optimization.
|
# ¿ Feb 5, 2014 20:45 |
|
Steve French posted:Which would be awfully nice for the compiler to know, but it doesn't. This isn't how undefined assumptions work in compilers. The compiler does know that, because if you didn't pass a large enough array into the function, then the result would be undefined, and the compiler is allowed to assume that your program is well-defined. Compilers don't check to make sure that a program is well-defined, they simply assume that it is and perform optimizations within that context, because it doesn't matter if the optimizations break not-well-defined programs; those programs were already undefined according to spec anyway.
|
# ¿ Feb 5, 2014 23:13 |
|
v1nce posted:Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated. urandom is the one that never blocks. random is the one that arbitrarily blocks when you use it heavily for not-very-good reasons.
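Python's os.urandom is a thin wrapper over that non-blocking interface, which is one reason it's the standard recommendation for key material:

```python
import os

# os.urandom draws from the kernel CSPRNG (/dev/urandom on Linux) and does
# not block once the pool is initialized, no matter how much you read.
key = os.urandom(32)
iv = os.urandom(16)
assert len(key) == 32 and len(iv) == 16
assert key != os.urandom(32)   # astronomically unlikely to ever collide
```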
|
# ¿ Jul 9, 2014 01:04 |
|
LeftistMuslimObama posted:Forgive me if this is a dumb question, but I don't really get the point of function-like macros anyway. Is it just that they're inlined so you don't add a level to the stack for small computations? It seems like most of the time that level of optimization would be unnecessary, and if I'm doing that calculation often enough that I need a shortcut for it, I'd rather make it a function in a header file that I can use in other source files or projects. One "nice" thing about macros is that they can be non-syntactic: they can expand to fragments that aren't complete expressions or statements, which no function or template can do. We have macros along those lines in our codebase at work. It's not like we couldn't work without them, but they make some parts of the codebase a lot nicer to deal with. That said, a lot of the "obvious" uses of macros are either terrible poo poo nobody should want to do, or are doable with just a little more work using either normal functions or templates. Macros have their place, mostly in reducing code that you would otherwise be forced to manually duplicate at every usage site.
|
# ¿ Mar 27, 2015 16:14 |
|
I once shipped some code that connected to a postgres database through a handle named mySQL.
|
# ¿ Aug 12, 2015 22:40 |
|
TooMuchAbstraction posted:For marketing purposes, I fail to see why a slightly-imperfect filter is so awful. False negatives (valid addresses marked invalid) will not hear from us; false positives ("wrong numbers") will either bounce or accidentally contact the wrong person. But either way as long as the failure rate is low, we'll end up with a collection of addresses that are largely valid, which is a hell of a lot better than a "try the address and see if it delivers" approach. I think most of us are assuming that you are vastly overestimating the cost of a false positive. Email bounces and misdeliveries are cheap. Your marketing coverage will be better (zero percent false negative rate!) if you just don't do any validation, and your costs will be very similar - there's no downside, a nontrivial potential upside (assuming you believe your marketing works), and it's less work for you! Fundamentally, the validity of an email address carries no meaningful information up until the point at which you are actually about to send an email; at that point, the easiest and most accurate way to determine validity is to simply send the email. Doing anything else just means that you'll piss off people whose email addresses are incorrectly rejected, at little-to-zero gain.
|
# ¿ Feb 18, 2016 04:17 |
|
Dr. Stab posted:Oh, sure, obviously. Now what is the regex to simulate a turing machine? In Perl, $foo =~ s/.*/$&/ee; is the same as $foo = eval $foo;.
|
# ¿ Feb 19, 2016 02:15 |
|
code:
Works in quite a few different languages.
|
# ¿ Jul 28, 2016 01:22 |
|
JawnV6 posted:Now that I say that, I'm not really sure if you can get an interrupt in the middle of a macro-fused pair of instructions. Gosh, if only there were a micro-architect around to help! I just skimmed through the simulator, and as near as I can tell, the Core microarchitecture never interrupts in between macro-fused ops. External interrupts are not guaranteed to arrive at the end of any particular instruction, just at the end of some instruction (or at a few other points, e.g. sync points within rep movs), so an external interrupt is simply deferred to the end of the pair. There are a few architecturally-visible interrupts (in/out instructions, disabling/enabling interrupts, hardware breakpoints) that could in theory happen on the first macro-op of a pair, but we take care to simply not fuse ops that are vulnerable to those. Signed, a microarchitect.
|
# ¿ Feb 14, 2017 21:16 |
|
JawnV6 posted:Ok, interrupt deferral makes sense. But you can still fault after the cmp and before the second op though? Put the jne on a different page requiring a fault or better yet straddling the canonical boundary. Everyone gets that edge wrong. I imagine those situations will never get fused though, the FE will recognize it needs to issue the fault before the decoder ever sees the pair. Making the fusing logic very simple and conservative also means that it takes less power, less die area, and is easier to do timing for. It wouldn't be shocking to see rules like "we only fuse if one of the instructions is a MOV and the other instruction is a non-memory-op and both instructions are coming out of the stream cache" still being good enough to see 1% improvement on some benchmark that some client cares about. If you can get a 1% improvement for what might as well be free then you're going to take it, especially if it's the sort of thing that you can potentially tune and improve in future generations to be even better. JawnV6 posted:There's no good way to fuse up a zero-length call though. The STA/STD pair required for the call's implicit push getting fused and not offering an instruction boundary between them, if the implicit stack location had to be paged in just to be marked dirty, etc. Too much going on there.
|
# ¿ Feb 15, 2017 03:31 |
|
|
JawnV6 posted:This is where it's sorta obvious my info is 5+ years out of date, I didn't know MOV was viable for any fusing. Register renaming makes some instructions quite light, but I thought that mechanism was separate from fusing. Stream cache also implies all goofy page/memory boundaries aren't going to be relevant. JawnV6 posted:Hmph. Binary translation shouldn't be that esoteric. C'mon, get things in gear over there.
|
# ¿ Feb 16, 2017 00:54 |