|
nielsm posted:Congratulations, you just described a broken password hashing system. pigdog posted:To clarify, you'd want to use a different, randomly generated salt (stored nearby) for every hash. (And to actually use a hash function rather than concat().) "hash(password + salt)" was pretty good in the 90s. However, passwords themselves don't contain much entropy, and these days hardware can be cheaply acquired to crack passwords "relatively" quickly. The answer to that is to use, in place of a standard cryptographic hash function, a password-based key derivation function, which has an adjustable work factor so as to scale up with CPU performance increases. PBKDF2 and bcrypt are two such, except that both fold under massively-parallel cracking hardware, which is addressed by scrypt and its adjustable memory factor. But really, the ultimate answer is that you shouldn't ever roll your own crypto, but use a library that does it right, and can update when what was previously "right" is now considered "horribly wrong, only an idiot would do that!"
|
# ¿ Feb 28, 2013 19:24 |
|
|
# ¿ Apr 30, 2024 04:14 |
|
Wheany posted:I think the point is that if you had a global salt and you saw that several users have the same hash, you would only need to brute-force the one hash and you would know the password of any user with the same hash. Furthermore, only one user needs a weak password for you to get in. With per-password salts you don't necessarily know which passwords are weak, so you either have to guess which users might have weak passwords, or otherwise iterate through the entire list of users, wasting time on the stronger ones.
|
# ¿ Mar 1, 2013 02:55 |
|
bobthecheese posted:I've seen, a number of times, a password being salted with not only a system-wide salt, but also the username. This adds the security of a good salt, ... I've never been shown evidence as to why this is a bad thing. What's so hard about storing high-entropy salts adjacent to passwords? But I suppose it's a silly question, as anyone who actually knows what they're doing is using a well-vetted library, so all the roll-your-own approaches are going to be amusingly wrong.
|
# ¿ Mar 1, 2013 15:23 |
|
teamdest posted:This isn't a scenario where this is even close to a good idea. The big mistake in security/crypto is making game-time decisions about which methods to use and/or implementing with insufficient knowledge. But there's no harm in reasoning about it as a mental exercise. However, the actual code should be left to multi-PhD experts. Edit: There's nothing fundamentally wrong with the description of credentials storage as "hash(password, unique_random_high_entropy_salt)" except that the hash functions traditionally used are computationally "too simple", such that brute-force techniques are now cost- and time-feasible; but replace "hash" with PBKDF2/bcrypt/scrypt and it's the same fundamental idea. Furthermore, there do exist legacy systems with security implementations prior to 1999, lots of them. At best, folks can gut and replace them as they come across them, but there's still knowledge to be had in understanding the particular vulnerabilities of any one deficient system. ExcessBLarg! fucked around with this message at 18:00 on Mar 1, 2013 |
# ¿ Mar 1, 2013 17:48 |
|
Tei posted:What is the general opinion here about forcing users some password variety? For example, a 64-character random hex string is a perfectly sufficient password. If your password requirements fail to accept that, your requirements are broken. Essentially, if the password can't be trivially broken with a sophisticated dictionary attack, it's good.
|
# ¿ Mar 1, 2013 19:55 |
|
bobthecheese posted:* provide plenty of entropy
|
# ¿ Mar 2, 2013 03:26 |
|
Hard NOP Life posted:I have to point out that just because someone emails your password doesn't mean they are storing it in plain text.
|
# ¿ Mar 3, 2013 17:43 |
|
Glimm posted:Facebook for Android modifies the VM it runs in on older Android versions to allow it to work even with a 65k method limit Dalvik restricts apps to (65k according to this bug report:
|
# ¿ Mar 5, 2013 21:18 |
|
Zombywuf posted:So hash on the client. A malicious client will just send you the hash from the DB. The way password hashing prevents that is by making the process of obtaining useful credentials from a database dump computationally infeasible (or at least very difficult). There's no way of doing that that doesn't also require the authentication agent (server) to do something computationally "challenging".
|
# ¿ Mar 5, 2013 21:30 |
|
Suspicious Dish posted:Because it originally looked like this:
|
# ¿ Mar 10, 2013 18:19 |
|
There's no useful way to get the element out of "pop" (or "stack_pop") since the assignment is local to that method only. It actually makes the data structure completely useless since it's "input only". In other words: carry on then posted:Let me guess, they also think Java is pass-by-reference for objects.
|
# ¿ Mar 10, 2013 18:31 |
|
Zemyla posted:FAKE EDIT: Also, I'm pretty sure ptr - 1 isn't even valid Standard C, since it can have undefined behavior.
|
# ¿ Mar 10, 2013 18:40 |
|
carry on then posted:Yeah, but how is the C one just as useless? code:
|
# ¿ Mar 10, 2013 18:42 |
|
Suspicious Dish posted:Not in C. Perhaps I should have put void **elem_out as the signature instead, but it's the same. Java object references aren't like C++ references. Actually they're more like implicit C/C++ pointers. All Java objects are dynamically allocated from the heap (but automatically freed by the garbage collector). The Java "." operator is equivalent to the C(/C++) "->" operator for struct(/class) types. However, there's no "*" or "&" operators, no double pointers, no pass-by-reference, etc. There's only pass-by-value.
Edit: Suspicious Dish posted:That wouldn't make any sense. ExcessBLarg! fucked around with this message at 18:58 on Mar 10, 2013 |
# ¿ Mar 10, 2013 18:50 |
|
I always observe a moment of pondering and reflection upon invoking "#include <ctype.h>". It's rarely a good idea, even if there's no choice in the matter.
|
# ¿ Mar 15, 2013 21:46 |
|
Otto Skorzeny posted:As an aside, MSVC and ICC seem to be pretty sanguine on lea in general, and often replace general arithmetic muls and adds with it
|
# ¿ Apr 4, 2013 19:16 |
|
abiogenesis posted:What do you guys think about R? Off the top of my head, some benefits of R are: The NA "number" for not available or missing values. Truly indispensable when you have input data with missing samples, are trying to match vectors/matrices with different dimensions, or have algorithms (e.g., non-circular convolution filter) for which certain output values are genuinely unknown. AFAIK MATLAB doesn't have this, with some folks using NaN (which also exists in R) as a crude approximation. Most functions treat "NA" values reasonably. Any numeric computation language that lacks an equivalent concept is broken in my book. Functions are first-class. "lapply" works like "map" in Python/Ruby/functional languages. I write a lot of R code in a somewhat functional style when it's too complicated to be vectorized (it usually contains vectorized code internally), and it beats the poo poo out of for loops and iterators. And downsides/horrors/WTFs: The language is really "intended" to be used interactively in a terminal (although readline support makes this far better than MATLAB's terminal mode). R scripts are used with the "Rscript" program, which really just invokes the R runtime in a "batch" mode that's essentially equivalent to running the script in the interactive shell line-by-line. A consequence of this is that the way code is parsed depends on whether it's in a block scope or at the (interactive) top level. For example, if you write code in separate-line-for-brace style, e.g., this snippet: code:
R runtime posted:Error: unexpected 'else' in "else" code:
The other downside to R is that it's not awesome with regard to memory management. I'm not aware of a mechanism that allows truly-large data sets (data.frames) to be file-backed, so you're limited to those that can fit in RAM. Many operations generate new data.frames/matrices instead of modifying existing ones, so sometimes you'll have to manually delete variables to free memory in the middle of a block, just to avoid running out of RAM or thrashing a lot. Related is the fact that data parsing functions (e.g., read.table) are quite slow due to their flexibility in being able to automagically parse many formats. These can be sped up quite significantly by declaring column contents and other parsing tricks. Honestly, all this is "fine" and is to be expected when dealing with larger data sets. My complaint is that R doesn't really offer a lot with regard to easy-to-use profiling tools to make these kinds of optimizations easier to implement. Now, all that said, the reason I use R over MATLAB, aside from its bias towards statistical computing, is because of (i) CRAN, (ii) the fact that it's already packaged in most Linux distributions, and (iii) the fact that, because it's free, it's trivial to instantiate a large cluster of R-running nodes without having to deal with licensing headaches. ExcessBLarg! fucked around with this message at 17:27 on Apr 16, 2013 |
# ¿ Apr 16, 2013 17:22 |
|
Jonnty posted:All this is pretty much fair enough, but the fact remains that you should always use try/except rather than isinstance.
In the absolute worst case, it's somewhat idiomatic in Ruby to include a "to_type" method for objects that implement the interface for a particular type without inheriting from the type's canonical class. For example, you may implement a data structure with a hash-like interface that doesn't inherit from class Hash. In that case, though, you'd still implement a "to_hash" method that actually converts it to a Hash, usually with some unfortunate memory costs. Even then, it's still preferable to replace "instance_of?(Hash)" with "respond_to?(:to_hash)" and just call the methods directly as if it were a Hash. But even that "nicer" version is a last resort.
|
# ¿ Apr 22, 2013 18:28 |
|
Honestly the whole premise is quite strange to me. I regularly deal with long-running computations that take a day or two to finish one "cycle". 1. Six hours isn't really that long. If the cycles are crazy expensive or crazy unobtainable (supercomputer) then sure, but in those cases you make sure your poo poo works well in advance of launching on those platforms. If something fucks up after six hours on a riff-raff machine, whatever, restart, go to bed. 2. I have difficulty coming up with a scenario where a user, when performing a limited-data test, would provide the right kind of dict-like object, but when performing a full-data run would not. True, user behavior can't always be predicted, but generally users don't do crazy batshit stuff either, and if they do, it's certainly on them.
|
# ¿ Apr 23, 2013 03:33 |
|
Suspicious Dish posted:
|
# ¿ Apr 28, 2013 16:16 |
|
shrughes posted:
code:
Pardot posted:Why? None of our stuff has anything. tef posted:I was under the impression that copyright was automatic under the Berne Convention. Or alternatively, suppose you license your code to some other entity who (lawfully) integrates it into their internal code. If they later decide to open-source it, the copyright notice makes it clear who owns that piece of code. Honestly, keeping track of copyright and licenses is important stuff. Revision control does help to maintain a trail of authorship, but only if repositories are preserved and code doesn't get packaged outside of them. It's also not really that hard.
|
# ¿ Jun 14, 2013 22:02 |
|
DAT NIGGA HOW posted:Why should I trust this guy's assessment at all? Maybe if he had an alternative that other experts can agree is done right, then I'll listen to him. The presentation of these not-previously-known facts and the originality of the investigation alone make his post constructive. It doesn't stop being constructive just because it may offend some folks.
|
# ¿ Jul 7, 2013 03:32 |
|
UraniumAnchor posted:I think the eventual consensus was that the assignment in the non-executing block still causes the interpreter to make "what" a local variable from that point on, instead of a function. So I guess Ruby functions aren't truly first class? code:
Thing is, it's pretty rare to define global methods in anything but short scripts and tutorials. If I'm going to write a script that does stuff mostly in the top-level anyways, I'd probably be using lambdas.
|
# ¿ Jul 13, 2013 21:27 |
|
Hughlander posted:OpenSSL's license isn't compatible with Ruby's at the source level? Actually, Ruby and OpenSSL is fine, but Ruby scripts that are GPL licensed (without an OpenSSL linking exception), or Ruby scripts that use a GPLed module, cannot be lawfully distributed if they also link (actually, load) OpenSSL. Thus use of the OpenSSL module in Ruby must be very explicit. Amusingly, as Ruby does runtime "linking", the common interpretation of license issues means this program is not lawful to distribute: code:
|
# ¿ Jul 31, 2013 17:37 |
|
vOv posted:Is there any way to write a generic min() function in C that isn't terrible? code:
|
# ¿ Feb 20, 2014 20:58 |
|
mjau posted:
|
# ¿ Feb 20, 2014 23:04 |
|
autoquit - Automatically quit node.js servers when inactive. OK, that's cool. But why would I ever want to do that? Ruben Vermeersch posted:You should really kill your backend at all times. This forces you to keep state out of it. Keep sessions in Mongo, Redis or memcache. Keep all important state out of the app tier.
|
# ¿ Feb 21, 2014 00:23 |
|
Plorkyeran posted:Does node make it easy to accidentally have persistent state? Another thing that node doesn't make easy is gracefully handling folks who connect to your service just as it's autoquitting. Serving null responses and 504s is kind of rude.
|
# ¿ Feb 21, 2014 00:56 |
|
Suspicious Dish posted:So, I don't see the race. It's more that he proclaims "you should really kill your backend at all times" and provides utterly dubious reasoning that serves no real purpose on a production machine.
|
# ¿ Feb 21, 2014 03:18 |
|
Suspicious Dish posted:Swing and a miss. I would still use braces myself.
|
# ¿ Feb 22, 2014 22:08 |
|
apseudonym posted:E: if you do build error handling into an if else chain of doom you are a bad person and should feel bad. Very very bad. Which, yeah, would've resulted in a compile error in this particular case. But there are better solutions, such that the "else if" commentary wasn't really worth him making.
|
# ¿ Feb 23, 2014 00:42 |
|
ToxicFrog posted:...both of which require the sftp subsystem,
|
# ¿ Mar 3, 2014 20:14 |
|
down with slavery posted:News to me. I guess I've just never seen that because it makes no sense (we are in the coding horrors thread after all) except as a method to piss sysadmins off. Now, this means that scp/rsync support requires that the user has remote login capability and that those commands are allowed to be executed. Also, the local and remote scp/rsync programs have to be compatible with each other. I don't believe either is standardized, but scp is an old and simple enough protocol that implementations retain compatibility with each other, while rsync is a sufficiently complex-but-useful program that everyone uses the same effective implementation. SFTP is a bit different. It relies on the SSHv2 concept of a "subsystem", which is a mechanism that allows the remote facility to be called by a general name, instead of relying on a specific binary to be available in PATH. The SFTP protocol itself is an IETF draft standard with multiple implementations, with OpenSSH's implementation being quite common. Anyways, SFTP, since it is called as an SSHv2 subsystem (in the absence of being piggy-backed on a completely different transport), is typically explicitly defined as such. Thus, with OpenSSH, you can turn off SFTP support in sshd_config, but it necessarily depends on sshd itself being available. Furthermore, SFTP is a "relatively recent", optional addition to the SSH protocol suite, so you may well come across machines whose SSH installations simply don't support it. Of course, just to make things more complicated, some "scp" programs may internally attempt to use SFTP with the traditional scp remote command as a fallback. It's also possible to transfer files over an ssh connection using non-ssh-specific remote commands (e.g., "ssh user@remote 'cat > ~/dest_file' < ~/src_file"), a mechanism that might be implemented by GUI clients in the event that neither SFTP nor scp are available.
|
# ¿ Mar 3, 2014 21:01 |
|
pokeyman posted:is a complete failure because JSON is not a subset of JavaScript.
|
# ¿ Mar 25, 2014 18:02 |
|
ratbert90 posted:Am I the coding horror? I thought snprintf was fine for known string sizes. Some code only checks snprintf for truncation, assuming that with certain use cases it's impossible to error. Other code checks for both truncation and error, but with the only error case being "snprintf(...) == -1". To me, if you're going to check the result of snprintf at all, it makes the most sense to ensure it's in the range [0, size), since, even if a negative return value other than -1 is implausible/"impossible", it's just as easy to check for it too. Even if I "know" the usage of snprintf can't result in truncation or error on the platform, I'd still do a result check as an assert "just in case". If you really know the usage of snprintf can't result in truncation or error, or more likely just don't care (i.e., preparing a string for log output) and specifically don't want to check the result, then casting the return value to "(void)" at least states that it's a conscious decision not to check the return value instead of an oversight. Although in this case, you definitely care if the string is truncated when passing it to popen.
|
# ¿ Apr 20, 2014 22:30 |
|
ratbert90 posted:while((fgets, buff, sizeof(buff), fp) != NULL); to fill a buffer from the output of popen as well. Is there a cleaner way to do that?
|
# ¿ Apr 20, 2014 22:42 |
|
Subjunctive posted:I feel so old. (I may have had to make that argument recently.)
|
# ¿ May 2, 2014 03:42 |
|
eithedog posted:A paranoid version of fopen. Edit: Looks like OpenSSL coding style.
|
# ¿ May 21, 2014 18:48 |
|
Deus Rex posted:Why do you think a university Computer Science program should waste time teaching version control or unit testing? Truth is, there's an expectation that a CS degree includes an element of practice (even what I'd call engineering) where some time is spent covering (implementations of CS in modern) computer systems. Otherwise fresh graduates would end up rehashing the past 40 years of computer systems and be unemployable. So yeah, I'd expect a fresh CS grad to be aware of the concept of version control, if not familiar with at least one system, and possibly even have some high-level understanding of such a system's implementation. Unit testing is also a very important practice in programming, and big-O is a thing in theoretical computer science. Not being familiar with any of them results in "well, what the hell did you learn?" Edit: It's like a chemistry major being completely unfamiliar with the operation of a chem lab and not being able to perform a titration.
|
# ¿ Jun 6, 2014 00:14 |
|
|
MononcQc posted:Pen and paper is still superior to a shitload of stuff and it owns.
|
# ¿ Jun 14, 2014 02:04 |