|
Reo posted:The idea being that as you move from one node to another, nodes similar to that one pop up and you can just keep on navigating like that. Mind map, I think.
|
# ? Aug 4, 2010 21:08 |
|
Anybody here remember SameGame? It popped into my head for some reason tonight. Found this while doing some googling. I wonder how terrible my attempt at trying to do something like that would turn out... edit: also this fletcher fucked around with this message at 10:00 on Aug 5, 2010 |
|
# ? Aug 5, 2010 08:30 |
|
Not really a programming question, but code. I have this layout of images that was put on my desk by the designer that I have to implement. I figure "hey, it's pretty complex but I can probably do it with a table, some colspan and rowspan." Ignore the fact that tables are horrible and I really should be using divs and CSS for everything, but looking at the layout my brain struggled to figure out how I would do it without tables.

For reference this is the desired layout: edit removed because I really shouldn't be showing people this

This is the resulting table from my attempt at creating it (using much technological assistance because my head was spinning after previous attempts): edit removed because I really shouldn't be showing people this

Now it looks fine in Safari (until I add a doctype) and it's completely broken in Firefox. Is this something that just cannot be done? Should I give up? Or is there something just broken in my ungodly table?

Edit: I'm an asshole; every time I post in this thread I figure out a solution. The problem was that Firefox couldn't tell there were 4 columns of about 240px each, because it's never clearly stated. So I just added 4 columns to the top of the table, set their widths to 240px, and magic! It works. I hate doing frontend work. clockwork automaton fucked around with this message at 15:58 on Aug 5, 2010
# ? Aug 5, 2010 15:49 |
|
I have a CMake question. I have different parts of my codebase in their own little directories, each with a subdirectory containing examples explaining the code.
edit: nevermind, I just had to specify the libraries by name instead of location a slime fucked around with this message at 16:51 on Aug 5, 2010 |
# ? Aug 5, 2010 16:33 |
|
I'm working in C++ here, but I figured this isn't really a C++ specific question. We have some IEEE 64-bit floating point data being generated by an ADA program on an ancient SGI box. We are also generating that data (using the same inputs and algorithm) via a C++ program running on a Sun box. In isolated cases (perhaps ten records every 50000 or so) these numbers will be off by 1x10^-8 or so as compared to the data originally generated on the SGI box. We've always simply attributed this to floating-point/rounding error, since they're two different processor architectures; however I've been asked in my spare time to verify that that is the cause. I'm not really sure how to go about it at all, since we've already conclusively proven that the algorithm in question is exactly the same in both programs. The only thing I could think of is to see if there's a way to figure out an "expected delta" between two processors' implementations of IEEE float 64 (and hell, 32), but again besides googling/asking about I'm not sure where to run with that. While I'm researching what I can for myself, does anyone have any thoughts on how I might prove the hypothesis that floating point differences between the two processors is the cause of the differences? Ciaphas fucked around with this message at 20:52 on Aug 5, 2010 |
# ? Aug 5, 2010 20:39 |
|
Ledneh posted:While I'm researching what I can for myself, does anyone have any thoughts on how I might prove the hypothesis that floating point differences between the two processors is the cause of the differences? Is there any reason you can't just move both pieces of code to the same box for testing and verify that inputs that produce different results on different boxes produce the same results when both run on the same box?
|
# ? Aug 5, 2010 21:03 |
|
Nippashish posted:Is there any reason you can't just move both pieces of code to the same box for testing and verify that inputs that produce different results on different boxes produce the same results when both run on the same box? I brought that up after I was handed the assignment and was given the "we can try, but IT security will have to get involved" response which is usually code for "not in YOUR lifetime." So possible, but not likely. Our computer security folks are extraordinarily overzealous. (edit) That and I just checked, we don't have an ADA compiler new enough to compile on the sun box anyway. Ciaphas fucked around with this message at 08:49 on Aug 6, 2010 |
# ? Aug 5, 2010 21:12 |
|
Ledneh posted:I'm not really sure how to go about it at all, since we've already conclusively proven that the algorithm in question is exactly the same in both programs. The only thing I could think of is to see if there's a way to figure out an "expected delta" between two processors' implementations of IEEE float 64 (and hell, 32), but again besides googling/asking about I'm not sure where to run with that. One of the processors is probably using extended precision. Can you post the code, or the hardware specs?
|
# ? Aug 6, 2010 03:42 |
|
the talent deficit posted:One of the processors is probably using extended precision. Can you post the code, or the hardware specs? As I recall the Sun box is an UltraSPARC T2, but I have no idea about the SGI one at all. I can't post the algorithm, no, partly because security (again) and partly because the machine is internet-disabled and gently caress if I'm retyping that beastly thing from memory. I did have an idea for another approach, though: the SGI machine has an old and busted, but functional, C compiler on it--I thought I'd whip together some sort of quick test program and compile/run it on both and check the results compared to a known result. Like perform math (or even simply try to store) on 20-some odd digit 64 bit floats, and see how/where the results become inaccurate on each machine. Assuming I turn off optimization entirely on both compilers, would this give me at least reasonable evidence that the only likely cause is differences in floating point roundoff error? Or would the different compilers still screw me even with simple floating point operations? (Again, assuming the re-written algorithm is correct, which we are quite sure of.) Ciaphas fucked around with this message at 08:50 on Aug 6, 2010 |
# ? Aug 6, 2010 08:45 |
|
It might be worth checking for any floating-point bugs on either chip.
|
# ? Aug 6, 2010 14:36 |
|
Ledneh posted:As I recall the Sun box is an UltraSPARC T2, but I have no idea about the SGI one at all. the ultra uses 64 bit floats try this on the sgi: double x, y, z; x = 3.0; y = 7.0; z = x/y; if(z == x/y) ... if it's using extended precision that comparison will return false, if it's using IEEE 754 64 bit floats it should return true. the sgi's processor should have an instruction to round all intermediate floats to 64 bit, but you're gonna have to find that yourself. the talent deficit fucked around with this message at 15:12 on Aug 6, 2010
# ? Aug 6, 2010 15:02 |
|
the talent deficit posted:the ultra uses 64 bit floats Both machines are using IEEE 754 floats then, because both returned true on the z==x/y statement. Ran some test code of my own too, and I can't copy the full output here but results only started to differ at around the 16th digit. Which I guess I should have expected, given the standard. When trying to represent the number 1234567.890123456789, for example, the SGI would report back 1234567.890123457001000000001 (exactly), and the Sun reported 1234567.89012345612223315719 (or something like that, went out to more than 50 decimals). The precision was slightly worse when I stored that number after some computation rather than storing it directly, but I didn't write that down. I'm guessing that over time those differences can accumulate... Ciaphas fucked around with this message at 21:40 on Aug 6, 2010 |
# ? Aug 6, 2010 21:34 |
|
Ledneh posted:Both machines are using IEEE 754 floats then, because both returned true on the z==x/y statement. Unless the Sun is secretly using 128-bit floats, this is probably just a difference in how the standard libraries print decimals.
|
# ? Aug 6, 2010 21:45 |
|
I thought of that right after I posted, yeah. Just used plain old printf() on both machines but I should have realized that they'd possibly be implemented differently. What's my alternative to printing out the values though?
|
# ? Aug 6, 2010 21:50 |
|
Ledneh posted:I thought of that right after I posted, yeah. Just used plain old printf() on both machines but I should have realized that they'd possibly be implemented differently. What's my alternative to printing out the values though? Print the hex value and do it by hand?
|
# ? Aug 6, 2010 21:55 |
|
sund posted:Print the hex value and do it by hand? That'll do. Thanks, I'll go try that. (edit) Damn and blast, they both have the same hex values. Back to square one. Ciaphas fucked around with this message at 01:52 on Aug 7, 2010
# ? Aug 6, 2010 21:58 |
|
As you suggested already, do shut off optimizations. A clever compiler could try shuffling around some of the floating point operations. Or at the least shut off any fast floating point optimization flags if you have access to some on both platforms. As an alternative to doing it by hand, write your own code for spitting out the values from the raw bits, since you might be doing this a lot. At least then you control it on both boxes.
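As a sketch of that raw-bits idea (written here in Python, which also uses IEEE 754 doubles, so the same bit patterns apply), dumping the bits avoids trusting either platform's printf:

```python
import struct

def double_bits(x):
    """Return the raw IEEE 754 bits of a double as a big-endian hex string."""
    return struct.pack(">d", x).hex()

# Two doubles that print identically to 15 digits may still differ in the
# last few bits; comparing the hex encodings is exact.
print(double_bits(1.0))        # 3ff0000000000000
print(double_bits(3.0 / 7.0))  # every bit of the rounded quotient
```

The C equivalent is to memcpy the double into a uint64_t and print it with a `%016` PRIx64 format, which keeps both boxes honest about their actual stored values.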
|
# ? Aug 9, 2010 18:43 |
|
Basic data structures question, regarding key/value pairs. Are either hash tables or balanced BSTs (e.g., boost::unordered_map and std::map, respectively, in C++) going to be the best performance I'm going to get on a data set where I might get some 100 insertions (no deletions) but maybe a million lookups over the course of a program? Or is there some better data structure I can use where the number of lookups vastly outweighs the number of modifications? The key/values in question here are std::strings to pointers, if that helps. Order is irrelevant and access needs to be random; iterative is once in a blue moon. Ciaphas fucked around with this message at 01:16 on Aug 11, 2010
# ? Aug 11, 2010 01:08 |
|
Ledneh posted:Data structures question, regarding key/value pairs. Are either hash tables or balanced BSTs (e.g., boost::unordered_map and std::map, respectively, in C++) going to be the best performance I'm going to get on a data set where I might get some 100 insertions (no deletions) but maybe a million lookups over the course of a program? Or is there some better data structure I can use where the number of lookups vastly outweighs the number of modifications?

1) Every compiler* already has unordered_map in the std or tr1 namespace or both, so you don't need to introduce another boost dependency unless you have some other reason for doing so.

2) Hashes are going to have faster lookup than a BST, although if your load factor is very high (> .8 or so) this advantage will deteriorate. I believe std::unordered_map keeps the load factor below .5 by default though soooooooo

*every compiler that isn't complete garbage
|
# ? Aug 11, 2010 01:19 |
|
Otto Skorzeny posted:1) Every compiler* already has unordered_map in the std or tr1 namespace or both, you don't need to introduce another boost dependency unless you have some other reason for doing so I haven't looked but given the utter heartache I've dealt with on this stupid compiler I'm gonna guess the footnote applies to me. Anyway, boost is already thoroughly ingrained into this thing (and in ways that std or std::tr1 doesn't provide for) so it's kinda moot Wish I knew what you meant by load factor. I'm so bad at this oh god what am I doing in this job If I end up sticking with unordered_map, is the default hash function for std::strings reasonably performant overall (as far as unordered_map::find() goes), or should I hook up another hash for my situation?
|
# ? Aug 11, 2010 01:28 |
|
Ledneh posted:Data structures question, regarding key/value pairs. Are either hash tables or balanced BSTs (e.g., boost::unordered_map and std::map, respectively, in C++) going to be the best performance I'm going to get on a data set where I might get some 100 insertions (no deletions) but maybe a million lookups over the course of a program? No.

Ledneh posted:Or is there some better data structure I can use where the number of lookups vastly outweighs the number of modifications? Yes. For example:

The tree produced by a std::map is not going to be perfectly balanced. You could make a binary search tree that has a linear-time .completely_rebalance() member function, which will result in faster lookups.

You could replace boost::unordered_map with a hash table that checks for collisions on insertion and, if there are any, resizes the table until there are none. This will make lookups faster (and allow a more efficient overall hash table data structure).

In place of a binary search tree, you could generally use a flatter, non-binary tree whose node sizes are tuned to the size of your cache lines, where each node contains an array of values, either sorted for binary search or containing the preorder flattening of a complete tree.

In place of a generic hash table, you could compute a perfect hash function upon each insertion and rebuild the table.

If your keys are strings, you could use a trie, optimized for cache efficiency as described above. This is probably not worth it if you only have 100 keys.

If your keys are strings, you could use a string type designed to allow word-by-word comparisons, instead of byte-by-byte. This is not an optimization that favors lookup over insertion. With only 100 keys, you could swap out a reliable hash function for a faster hash function. TINAOTFLOI.

If you're using a search tree and your keys are strings and you're using the GNU library, use a key type like code:
For hash tables, use the above but only let the hash function look at firstEightCharacters. Maybe only let it look at firstFourCharacters. shrughes fucked around with this message at 01:37 on Aug 11, 2010 |
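As a loose illustration of that pay-at-insert-time theme, here is a Python sketch (not the C++ code under discussion, and the key set is made up): CPython caches each string's hash, and sys.intern lets dict lookups short-circuit key comparison to a pointer check when both the stored key and the query were interned.

```python
import sys

# Hypothetical table: ~100 string keys inserted once, looked up constantly.
table = {sys.intern(k): i for i, k in enumerate(["alpha", "beta", "gamma"])}

def lookup(key):
    # Interning the query means equal keys are the *same* object, so the
    # dict's equality check degenerates to an identity check.
    return table[sys.intern(key)]

print(lookup("beta"))  # 1
```

In the C++ case the closest stock equivalents are calling unordered_map::reserve up front so no rehashing happens during lookups, plus the fixed-prefix key type quoted above.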
# ? Aug 11, 2010 01:33 |
|
I sort of understand the motivation for a String Builder class, but do all languages have one? I'm thinking of PHP, Perl, Python, and Ruby. I know C# and Java do. If the others don't, why? Do they not suffer from the same implementation problem? Or do they not care?
|
# ? Aug 11, 2010 18:08 |
|
I can't say about the others, but the Python idiom for efficient string concatenation is to use a list comprehension, or failing that, build a list of strings and then use 'join' on it. Lua is similar except the equivalent function is named 'concat'. If you're using naive concatenation both suffer from the same performance concerns, it's just that there are other ways to get around that than a StringBuilder.
|
# ? Aug 11, 2010 18:28 |
|
ToxicFrog posted:I can't say about the others, but the Python idiom for efficient string concatenation is to use a list comprehension, or failing that, build a list of strings and then use 'join' on it. either way, you'd be using join() at the end unless I'm missing something obvious: code:
|
# ? Aug 11, 2010 18:39 |
|
^^ Yeah, sorry, I didn't set precedence correctly. (use a list comprehension or manually construct a list of strings) and then use join on it.
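For anyone skimming, the two idioms being contrasted look like this (the word list is made up for illustration):

```python
words = ["lorem", "ipsum", "dolor"]  # hypothetical input

# Naive concatenation: each += can copy everything built so far, O(n^2) total.
naive = ""
for w in words:
    naive += w + " "
naive = naive.rstrip()

# Idiomatic: build the pieces (list comprehension or a manually built list),
# then join once at the end.
joined = " ".join([w for w in words])

print(joined)  # lorem ipsum dolor
```

CPython can sometimes optimize += on strings in place, but join is the idiom that is guaranteed to do a single allocation pass.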
|
# ? Aug 11, 2010 18:45 |
|
I'm not entirely sure if this fits in this thread, but hopefully it does. I'm attempting to apply a .patch file to some of my source code, and where I got it from recommended using TortoiseSVN. I installed it, restarted my computer, but couldn't get it working. Are there any other (hopefully easier) programs I can use besides TortoiseSVN? Alters fucked around with this message at 00:42 on Aug 12, 2010
# ? Aug 12, 2010 00:37 |
|
Patch
|
# ? Aug 12, 2010 01:30 |
|
Triple Tech posted:I sort of understand the motivation for a String Builder class, but do all languages have one? I'm thinking of PHP, Perl, Python, and Ruby. I know C# and Java do. If the others don't, why? Do they not suffer from the same implementation problem? Or do they not care? Erlang has something called an iolist which is a (possibly deep) list of binaries (typically utf encoded string fragments), characters and iolists. All the io functions flatten and combine them efficiently when passed them as an argument. They're rather awesome. An example: ["this is", ["a list", " ", "of the last five commenters encoded as binaries: ", [<<"Mustach">>, " ", <<"Alters">>, " ", <<"ToxicFrog">>, " ", <<"Lurchington">>, " ", <<"Triple Tech">>]], "."] When sent over a port/io, it's converted to: <<"this is a list of the last five commenters encoded as binaries: Mustach Alters ToxicFrog Lurchington Triple Tech.">>
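A rough Python analogue of the iolist idea — keep fragments in an arbitrarily nested list and flatten only once, at output time — might look like this (message contents made up):

```python
def flatten(iolist):
    """Yield leaf string fragments from an arbitrarily nested list, depth-first."""
    for item in iolist:
        if isinstance(item, list):
            yield from flatten(item)
        else:
            yield item

msg = ["this is", [" a list", " of fragments: ", ["a", " ", "b"]], "."]
print("".join(flatten(msg)))  # this is a list of fragments: a b.
```

The win is the same as with iolists: intermediate concatenations never happen, so building the message is cheap no matter how the fragments nest.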
|
# ? Aug 12, 2010 03:41 |
|
Mustach posted:Patch Awesome, thanks. I had difficulties searching for anything related to "Patch" on google without coming up with useless results.
|
# ? Aug 12, 2010 07:19 |
|
I've got a feeling I've gone too fancy for my own good and am trying something impossible, but I'd like the confirmation. In C# .NET, I have a class whose constructor takes a Dictionary object. I've made a new class that extends it. In the extension class's constructor, I'm trying to request a variable from the querystring, use that to build the dictionary, and then call the base constructor with my new dictionary. There's no clever trick that'd actually make that work, is there?
|
# ? Aug 12, 2010 18:05 |
|
You can get around it by writing a static method that returns the Dictionary that you then pass into the base constructor, like so.
|
# ? Aug 12, 2010 18:31 |
|
That is a very cool trick, thank you.
|
# ? Aug 12, 2010 21:17 |
|
What's a good 'turnkey' blog hosting service for a programming blog? I want something that I don't have to host myself, and that can do things like syntax highlighting out of the box.
|
# ? Aug 17, 2010 09:08 |
|
I would assume Blogger should be sufficient, as Google uses it for their development blogs.
|
# ? Aug 17, 2010 10:56 |
|
Blogger can't do syntax highlighting out of the box but it would be relatively simple to include a reference to google-code-prettify or use GitHub's Gist embedding for code snippets.
|
# ? Aug 17, 2010 17:44 |
|
zootm posted:Blogger can't do syntax highlighting out of the box but it would be relatively simple to include a reference to google-code-prettify or use GitHub's Gist embedding for code snippets. SyntaxHighlighter does the job too, I made a post about setting it up here: http://arclite-emp.blogspot.com/2010/07/syntaxhighlighter-caveats.html. I don't do a lot of heavy blogging or anything, but syntax highlighting has worked pretty well so far.
|
# ? Aug 17, 2010 17:47 |
|
zootm posted:Blogger can't do syntax highlighting out of the box but it would be relatively simple to include a reference to google-code-prettify or use GitHub's Gist embedding for code snippets. This will spit out html directly which should be embeddable on anything http://tohtml.com/
|
# ? Aug 17, 2010 20:56 |
|
I have small python script which runs on a number of windows machines and runs shell commands. The script accepts instructions by listening on a port for UDP packets which contain the command to execute and other parameters. The packets I send are completely unencrypted and unverified, so this is a very insecure system, and I am currently relying on obscurity, but I want to change that. It seems the most simple way to do this is to use an SSL wrapped stream socket (not UDP, at least not easily in python). I would have these remote machines listen for connections, somehow verify the connection from the server, and then process the payload in the connection. I would imagine a situation similar to using RSA key authentication with SSH, where the remote machines store the public key paired to the private key, which is kept on the 'server' from which I send commands. How do SSL certificates fit into this? For the simple case of securing basic socket communication between one sender and multiple receivers, is this the right path to take? Or is there a better avenue I haven't considered?
|
# ? Aug 19, 2010 03:46 |
|
xPanda posted:I want my computers to make secure connections and know who they're talking to. How do SSL certificates fit into this? Use TLS (sometimes known as "SSL"). I'm going to use "listeners" and "connectors" to refer to the machines you describe that listen and connect. Make yourself a CA and make it be the only CA that listeners and connectors know about. Give the listeners certificates that your CA has signed, identifying who they are. Give the connectors certificates that your CA has signed, identifying who they are. Have both the listeners and connectors authenticate who they are when doing the TLS handshake. (Normally on the web, only one end, the http server, proves who it is when using the TLS protocol. Here you want both ends to do so.) Then make good test cases to make sure you're really requiring that the client and server both be authenticated.
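In Python's ssl module, the mutual-authentication setup described above maps roughly onto the following sketch; the certificate file names are placeholders for whatever your private CA issues:

```python
import ssl

def listener_context(certfile=None, keyfile=None, cafile=None):
    """TLS context for a listener: present our cert and demand one from the peer."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # refuse connectors without a valid client cert
    if cafile:
        ctx.load_verify_locations(cafile)       # your CA's cert, e.g. "my-ca.pem" (hypothetical)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)  # this listener's signed cert + private key
    return ctx

# The connector side is symmetric: build a PROTOCOL_TLS_CLIENT context, load the
# same CA file with load_verify_locations, load the connector's own cert with
# load_cert_chain, then wrap the TCP socket with ctx.wrap_socket(sock,
# server_hostname=...) before sending the command payload.
```

Setting verify_mode to CERT_REQUIRED on the server side is the step that turns on client-certificate authentication; without it, only the listener proves its identity, as in ordinary HTTPS.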
|
# ? Aug 19, 2010 05:07 |
|
shrughes posted:Use TLS (sometimes known as "SSL"). I'm going to use "listeners" and "connectors" to refer to the machines you describe that listen and connect. Make yourself a CA and make it be the only CA that listeners and connectors know about. Give the listeners certificates that your CA has signed, identifying who they are. Give the connectors certificates that your CA has signed, identifying who they are. Have both the listeners and connectors authenticate who they are when doing the TLS handshake. (Normally on the web, only one end, the http server, proves who it is when using the TLS protocol. Here you want both ends to do so.) Cheers, I think I have something almost working. I'm curious about certificate security though. All parties have to have the CA certificate, and this is what is used to sign certificate requests. Is this secure because you only spread around the CA certificate, and not the CA key? I need to use both to sign a certificate request, but seem to only need the CA certificate for listener and connector authentication. Is that correct?
|
# ? Aug 19, 2010 08:01 |