|
Harvey Mantaco posted:It's tough as hell to google things like "||", google doesn't seem to include it or something. This is precisely why I gave up on learning Perl. Just try to Google anything in this... code:
|
# ? Feb 28, 2012 02:51 |
|
oRenj9 posted:This is precisely why I gave up on learning Perl. Just try to Google anything in this... Perl's magic variables are loving insane. $$ is probably the most ridiculous (it's the PID). also, apparently $a and $b are sort of magic too: quote:$a e: $$ is the most ridiculous looking, there's some magic variables that are extremely odd though Their mnemonics are awesome too: quote:$/ Look Around You fucked around with this message at 03:27 on Feb 28, 2012 |
# ? Feb 28, 2012 03:00 |
|
Look Around You posted:Perl's magic variables are loving insane. $$ is probably the most ridiculous (it's the PID). Matches every shell, not that nuts.
|
# ? Feb 28, 2012 04:15 |
|
MrMoo posted:Matches every shell, not that nuts. Yeah that's not the most insane one. It was just the weirdest looking that I saw immediately. There's also $, $. $' and $` . Pretty much any punctuation after a $ is a magic variable, is what I meant to say, though I failed at it. e: Eh, it's not a big deal, it's just that Perl can be cryptic as gently caress to decipher if you didn't write it within the last day. That's pretty widely known though. e2: Doesn't take away from its usefulness as a "I need to get something done fast and I don't care if it's ugly" tool though. Look Around You fucked around with this message at 04:24 on Feb 28, 2012 |
# ? Feb 28, 2012 04:22 |
|
oRenj9 posted:This is precisely why I gave up on learning Perl. Just try to Google anything in this... code:
|
# ? Feb 28, 2012 06:45 |
|
The Gripper posted:Hey man that's not fair, you could totally just remove all the special variables from your example That's even worse. What are you chomping and printing? It's the implicit $_, and you're giving someone nothing to search for instead of the cryptic-but-still-present $_.
|
# ? Feb 28, 2012 14:15 |
|
You just have to know how Perl works. Just read Programming Perl, 3rd edition, straight through, and you won't need to Google things.
|
# ? Feb 28, 2012 14:20 |
|
Lysidas posted:That's even worse What are you chomping and printing? It's the implicit $_, and you're giving someone nothing to search for instead of the cryptic-but-still-present $_. code:
|
# ? Feb 28, 2012 14:39 |
|
Not having used Perl, it feels like two different languages that both manage to run on the same interpreter. There's wonderful, readable Perl, and then there's... that.
|
# ? Feb 28, 2012 15:05 |
|
not now posted:As ManlyWeevil said, the first idea is to use finite differences to approximate the derivative. Higher order formulas and error estimates can be found here: http://en.wikipedia.org/wiki/Numerical_differentiation I'm doing a course in numerical solutions to DEs. I'm not so much concerned about making an algo that spits out 'solutions' (ie approximations), but rather I need to design it such that user-specified accuracy is guaranteed while computational costs are controlled/minimized. Kim Jong III posted:I can't really answer this because I'm all math stupid (even though I've got a mathematics degree) I'm in that thread too
|
# ? Feb 28, 2012 15:15 |
|
For class, we're currently looking at caches, and just looked at profilers (in my case, it was gprof). One of the questions was asking about this scenario: code:
code:
code:
|
# ? Feb 28, 2012 15:33 |
|
Suspicious Dish posted:Not having used Perl, it feels like two different languages that both manage to run on the same interpreter. There's wonderful, readable Perl, and then there's... that. This is why when I'm teaching people Perl, the existence of perlvar is almost the very first thing I cover. Followed by use English;.
|
# ? Feb 28, 2012 16:47 |
|
Master_Odin posted:For class, we're currently looking at caches, and just looked at profilers (in my case, it was gprof).
|
# ? Feb 28, 2012 16:58 |
|
Master_Odin posted:The large "code" I'm running for the test cases is just simply Maybe I'm misunderstanding?
|
# ? Feb 28, 2012 17:02 |
|
Slap -S on there and see what the compiler is doing to your code.
|
# ? Feb 28, 2012 17:26 |
|
Tots posted:What's in the large code block? Specifically code block 2, since that seems to run the fastest The Gripper posted:Is that all the details of the scenario? If it is then it sounds like your teacher is a bit of a dick for not giving you a straight answer. Both should be equivalently fast, the second would be faster than the first if you were caching large_code_block2() calls, and it wouldn't be particularly useful to profile because some optimizing compilers will remove empty loops anyway. She gave us a reading that basically attributed this to hardware wackiness. bar(int) takes far longer to run to completion than foo even though both are doing the same task (the for loop and incrementing a variable). There's probably an answer, but I might just go with "caches" and try and keep asking her until she actually stops being a dick and tells me. Program: code:
code:
|
# ? Feb 28, 2012 19:57 |
|
I'm pretty confident the answer is, in fact, not "caches" because both versions should cache equally well (and be using registers pretty much exclusively anyway). Listen to JawnV6 and look at the disassembled compiled code, because the difference is almost certainly slightly different instructions generated for one loop than the other.
|
# ? Feb 28, 2012 20:10 |
|
Zhentar posted:I'm pretty confident the answer is, in fact, not "caches" because both versions should cache equally well (and be using registers pretty much exclusively anyway). Listen to JawnV6 and look at the disassembled compiled code, because the difference is almost certainly slightly different instructions generated for one loop than the other. Thanks for the help guys, I'm not going to worry about this anymore.
|
# ? Feb 29, 2012 00:06 |
Master_Odin posted:I would, but I have no idea what I'd be looking at here. I say caches because that was the last topic introduced (learning about direct-mapped, fully associative, set associative; point of L1 and L2; etc.) and have not in any way covered compiler stuff. If the two chunks of code were both large, and one of the cases was very uncommon, moving the uncommon case to a separate function might make sense as a micro-optimisation, since then there is less chance it will be pulled into the CPU's instruction cache and waste space there when it isn't needed. But that's seriously hypothetical.
|
|
# ? Feb 29, 2012 00:16 |
|
Does anyone know of any blog posts or anything about drawing pie charts? We have done it a few times in Javascript and on the iPhone and encountered all the usual things like labeling, dealing with 101%/99% totals...Just looking for some other views on the process. Heck, even discussion about graphs in general would be a good read.
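The 101%/99% totals happen because each slice is rounded on its own; largest-remainder rounding avoids it by construction. A minimal sketch in Python for illustration (the function name is made up, and the thread's actual code was JavaScript/iPhone, so only the idea carries over):

```python
def rounded_percentages(values):
    """Round slice percentages to integers that always sum to 100.

    Naive per-slice rounding is what produces 99% or 101% totals;
    the largest-remainder method floors everything, then hands the
    leftover points to the slices whose fractional parts were cut
    off the hardest.
    """
    total = float(sum(values))
    exact = [v * 100.0 / total for v in values]
    floors = [int(p) for p in exact]
    leftover = 100 - sum(floors)
    # Indices sorted by how much each slice lost to flooring.
    order = sorted(range(len(values)),
                   key=lambda i: exact[i] - floors[i], reverse=True)
    for i in order[:leftover]:
        floors[i] += 1
    return floors

print(rounded_percentages([60, 30, 10]))  # [60, 30, 10]
print(sum(rounded_percentages([1, 1, 1])))  # 100
```

The same trick works per-chart regardless of how the slices are drawn; ties among equal remainders are broken arbitrarily, which is usually fine for display.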
|
# ? Feb 29, 2012 00:58 |
|
My useless suggestion is to use more informative charts than pie charts. edit: But to be a bit more useful, I'm curious why you're looking for such a thing. Have you found anything interesting or challenging when drawing them in the past? pokeyman fucked around with this message at 01:14 on Feb 29, 2012 |
# ? Feb 29, 2012 01:10 |
|
Master_Odin posted:I would, but I have no idea what I'd be looking at here. I say caches because that was the last topic introduced (learning about direct-mapped, fully associative, set associative; point of L1 and L2; etc.) and have not in any way covered compiler stuff. Being able to read assembly dumps is a skill you should pick up sooner or later.
|
# ? Feb 29, 2012 01:10 |
|
Zhentar posted:(and be using registers pretty much exclusively anyway). Nah, at -O0 gcc operates entirely on the stack. In that pastebin foo uses [rbp-20] as i, bar uses [rbp-4]. The actual for loops are compiled to be basically identical, but bar has the overhead of control flow. Master_Odin, if you're interested, bar is executing 2 jumps before it calls foo. You check if the argument i is 0 (cmpl $0, -20(%rbp)), then you check if it's 1 (cmpl $1, -20(%rbp)), then finally store the argument to foo and jump. When i=0, you've still got an extra branch checking for that condition before executing the for loop. I don't understand the profiler output you posted earlier (never used gprof) but you're calling bar twice. On the first time, it figures out the control flow and calls foo, so 'bar' looks like it takes 'foo+overhead' time. Then you call bar again and the very first jump goes a different direction[1], then you hit the for loop. So bar has a little extra overhead both times you call it. I'm not sure if that entirely explains the discrepancy, but it doesn't surprise me that bar would take measurably longer to run the 'same' code. Control flow isn't free. [1] I'm really hesitant to bring up branch prediction, especially in a context like this. Early on in my career, another engineer brought me a snippet of code and said it mispredicted 5 times. I was quick to explain, "Oh of course you'd expect 5, the first one is branch y that goes forward..." and had a neat lil story about how it happened. Another engineer overheard, and came to correct me with their neat lil story about how it happened. Long story short, 3 people well-versed in the microarchitecture were all entirely wrong about how branches were predicted, even when the total count was given in advance. So I don't have much faith in coders predicting predictors a priori.
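The "control flow isn't free" point can be poked at in any language, though only the shape of the argument carries over. This sketch is Python (so what you'd measure is interpreter dispatch, not cmpl/jmp pairs), and foo/bar are stand-ins for the thread's C functions, not the actual assignment code:

```python
import timeit

def foo(i):
    # the loop both functions share: count up to i
    total = 0
    for _ in range(i):
        total += 1
    return total

def bar(sel, i):
    # extra dispatch in front of the identical work, playing the
    # role of the two compares-and-jumps in the -O0 disassembly
    if sel == 0:
        return foo(i)
    elif sel == 1:
        total = 0
        for _ in range(i):
            total += 1
        return total
    return 0

# Same answer either way; bar just pays for the checks first.
print(foo(1000) == bar(0, 1000))  # True

# Timing both shows bar's small, constant dispatch cost on top of
# the shared loop (not asserted here: wall-clock timing is noisy).
t_foo = timeit.timeit(lambda: foo(1000), number=2000)
t_bar = timeit.timeit(lambda: bar(0, 1000), number=2000)
```

The ratio shrinks as the loop body grows, which is the same reason the overhead only shows up at all when the "large code block" is trivial.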
|
# ? Feb 29, 2012 01:16 |
|
pokeyman posted:My useless suggestion is to use more informative charts than pie charts. We have a thing where you record how many fruits/vegetables you eat each day. One of the screens in the app is a little pie chart that shows what % you've been eating, by color. There's only 5 colors (red, green, blue, yellow, orange) so that's one thing we have going for us. This guy at work has been dinking with it for like 3 days, he's weird. He spent a week on some goofy algorithm to split up a recipe title across multiple lines of text and in the end used something we came up with on the beginning of day 2. pokeyman posted:edit: But to be a bit more useful, I'm curious why you're looking for such a thing. Have you found anything interesting or challenging when drawing them in the past? You run into little things like how to place the labels, especially when you get like 3-4 slices that are very small (1%/2%) and the labels overrun each other. One thing we did is offset them, but I just would like to read about others' trials and tribulations.
|
# ? Feb 29, 2012 02:00 |
|
Bob Morales posted:Does anyone know of any blog posts or anything about drawing pie charts? We have done it a few times in Javascript and on the iPhone and encountered all the usual things like labeling, dealing with 101%/99% totals...Just looking for some other views on the process. Heck, even discussion about graphs in general would be a good read. Why don't you just use gnuplot or R or something else free that already does this?
|
# ? Feb 29, 2012 02:31 |
|
ultrafilter posted:Why don't you just use gnuplot or R or something else free that already does this? We use stuff like jQplot, and we found an iPhone library that does it (looks better too), but I'm still interested from an academic standpoint
|
# ? Feb 29, 2012 02:52 |
|
Bob Morales posted:We use stuff like jQplot, and we found an iPhone library that does it (looks better too), but I'm still interested from an academic standpoint I can't offhand get you any good documentation or discussion of them. However, pie charts sound like they should be a fun project to code up by hand - the basic idea isn't hard, so the messy part is the corner cases. And the label placement, ugh. (Labels go in a box somewhere separate. There, done.) edit: Actually, have something. The ggplot2 package in R is generally a neat piece of plotting software, and there are sometimes interesting discussions around it. Computer viking fucked around with this message at 04:01 on Feb 29, 2012 |
# ? Feb 29, 2012 03:55 |
|
Hi, I'm a guy writing some vbs. I'm quite rubbish at it, and would love some insight from people less rubbish than me. I wrote a simple script to return a list of computer properties in a Microsoft Windows domain. code:
That last property, the Physical Memory... how do I indicate I only want the result to a couple of decimal places? At the end of the output it does the last computer twice (has two identical entries for the last computer). Can anyone see why easily? Any other suggestions/critiques are welcome.
|
# ? Feb 29, 2012 06:23 |
|
Without being constructive, you're going to catch flak for that On Error Resume Next
|
# ? Feb 29, 2012 06:32 |
|
Otto Skorzeny posted:Without being constructive, you're going to catch flak for that On Error Resume Next It's ok. If a decent reason is given with the flak, I'll listen, if it's just 'hurf durf leet coder lol' rubbish then I couldn't care less.
|
# ? Feb 29, 2012 08:09 |
Tony Montana posted:It's ok. If a decent reason is given with the flak, I'll listen, if it's just 'hurf durf leet coder lol' rubbish then I couldn't care less. (computer) "Your brand new car was just crushed in a collision!" (you) "Sucks for you. On Error Resume Next" "The house is on fire! Everyone will die!" "Oh, and? Why should I care? On Error Resume Next" If you don't handle errors you don't know how badly things have gone wrong, or if they have. It's a recipe for disaster, your code will just stop working and you don't know why. For example in your code, the opening of the files in the beginning could fail for any number of reasons, and the script would just go on as if everything was okay which it obviously isn't. Operations can and will fail occasionally so be sure you know what you want to do when it happens. Option A: Pretend nothing will ever go wrong and let the runtime explode with an error as soon as something goes wrong. Option B: Consider error cases for every operation and handle all errors that can be recovered from in a reasonable way, or diagnose+display the error so the operator has a better chance of recovering. Covering your eyes and ears and just charging forward is seldom a good idea.
|
|
# ? Feb 29, 2012 12:39 |
|
Yes, you are right and I see your logic. But consider the context. This is a small, lovely vbs script for collecting trivial information about a list of servers. I'm not writing an application, this isn't even really a tool. It's just a little script that does its thing. Do I want to think through every possible eventuality and write error handling for each possible error code at each possible junction? No-one else will use my script, just me and I only need it to do this one job (this isn't even my job, I'm just helping someone else who asks and was doing this manually). The loops stop if it hits an error. If I give it a list of 1000 servers and it goes through 723 of them and at the 724th the server responds strangely (or something), the script stops. We don't want that, we want it to continue on and if there are a few weird entries in the logs then they can be manually checked or troubleshot if we really want the script to do it. Surely best practice is also about context and implementation. In the 'real world' you need to achieve goals often in the most efficient manner possible. Yes, best practice is a tool to assist you but it can also paralyse you if you don't temper it with reality and experience.
|
# ? Feb 29, 2012 14:01 |
|
Say I have two tables containing information about the same students. Table1 has information on their locations, grade, etc; Table2 on their evaluations. Some of the information is the same, some different, and some should be the same but are not. Parts of the tables will therefore be compared and a list will be created from that. Is it better to create one superclass, Student, and two subclasses, StudGeneral, StudEvaluation; or is it better to create only one class containing information from both tables? That is, should I care more about the structure of the data (and have two objects for each real life student) or more about the structure of the world (and only have one object)? And what is a good book to read to understand such things? Amazing Spaceship fucked around with this message at 15:10 on Feb 29, 2012 |
# ? Feb 29, 2012 15:07 |
|
Tony Montana posted:The loops stop if it hits an error. If I give it a list of 1000 servers and it goes through 723 of them and at the 724th the server responds strangely (or something), the script stops. We don't want that, we want it to continue on and if there are a few weird entries in the logs then they can be manually checked or troubleshot if we really want the script to do it. At least have it log the error and try to recover as gracefully as it can. Entire companies run on some script some intern hacked together 10 years ago. It's a scary place out there.
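The log-and-continue advice has a simple shape: scope the error handling to each server, not to the whole run. A sketch of that shape in Python (the thread's script is VBScript, and every name here is invented for illustration):

```python
def collect(servers, query):
    """Query each server; one failure shouldn't kill the run,
    but it must be recorded rather than silently swallowed."""
    results, failures = [], []
    for host in servers:
        try:
            results.append((host, query(host)))
        except Exception as exc:
            # server 724 responding strangely lands here, and the
            # loop carries on to server 725
            failures.append((host, str(exc)))
    return results, failures

def fake_query(host):
    # stand-in for the real WMI/AD lookup
    if host == "bad-server":
        raise RuntimeError("no response")
    return "4 GB"

ok, bad = collect(["a", "bad-server", "c"], fake_query)
print(ok)   # [('a', '4 GB'), ('c', '4 GB')]
print(bad)  # [('bad-server', 'no response')]
```

The equivalent in VBScript is enabling On Error Resume Next only inside the loop body and checking Err.Number after each risky call, so failures get logged per server instead of vanishing.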
|
# ? Feb 29, 2012 15:59 |
|
I have a stupid MS Access question. This is the basic data: code:
code:
Thanks. I have no idea how to word these types of questions for Google.
|
# ? Feb 29, 2012 16:51 |
|
Tony Montana posted:It's ok. If a decent reason is given with the flak, I'll listen, if it's just 'hurf durf leet coder lol' rubbish then I couldn't care less. I'll try and explain it in small words then: It's a bad idea, don't do it. It's a bit like making GBS threads yourself and instead of going to the toilet to clean up, smearing poop everywhere you go. Try using on error goto to jump to a *known* point to recover from. on error resume next has some nice behaviours. If you get an error in an if statement condition, it continues onto the next line, as if the if statement was true.
|
# ? Feb 29, 2012 17:06 |
|
The Aphasian posted:I have a stupid MS Access question. Hmm. It's ugly, but you could do something like this in SQL: code:
|
# ? Feb 29, 2012 17:11 |
|
Thanks for the answer. I got pulled into an unrelated project so I can't try it now, but I wanted to say thanks.
|
# ? Feb 29, 2012 20:57 |
|
Computer viking posted:Hmm. It's ugly, but you could do something like this in SQL: Something MySQLish but I think conforms to most SQLs. SELECT c.*, CASE WHEN c.type = 'President' THEN 1 ELSE FLOOR(2 + RAND() * 99) END AS my_order FROM companies c GROUP BY c.company ORDER BY my_order ASC Essentially you rank the rows, if the type is President give him a ranking of 1, order it by that field and then group the rows together by company.
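The CASE ranking is easy to sanity-check with whatever engine is on hand. A sketch using Python's sqlite3, with a fixed 0/1 rank instead of RAND() and without the GROUP BY hack (table and data are invented to match the thread's description, and sqlite syntax may differ slightly from Access or MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE companies (company TEXT, name TEXT, type TEXT)")
con.executemany(
    "INSERT INTO companies VALUES (?, ?, ?)",
    [("Acme", "Alice", "Clerk"),
     ("Acme", "Bob", "President"),
     ("Acme", "Carol", "Sales")],
)
# Rank the president 0 and everyone else 1, then sort on that rank
# within each company so the president always comes out on top.
rows = con.execute(
    """SELECT name, type,
              CASE WHEN type = 'President' THEN 0 ELSE 1 END AS my_order
       FROM companies
       ORDER BY company, my_order""").fetchall()
print(rows[0])  # ('Bob', 'President', 0)
```

Ordering among the non-presidents is unspecified here; add a second ORDER BY term (name, or a random expression) if the remaining rows need a defined order.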
|
# ? Feb 29, 2012 22:01 |
Ah, that's sort of the same idea - I cast the result of the comparison to something sortable, while you use a case to do the same thing. (I guess you could eke out a bit more performance by using a fixed number instead of rand(), too.) I think depending on group by to return the top row is a mysql-only hack, though: AFAIK most other DBs only allow you to return aggregate functions and the field(s) you grouped on. Computer viking fucked around with this message at 22:11 on Feb 29, 2012 |
# ? Feb 29, 2012 22:08 |