|
JawnV6 posted:You're going to be calling the power() function multiple times. It seems like you're only focusing on the final instance it's called. You should write out explicitly what the code does, starting from the function call power(5.6,2). And d should never be incremented or changed. When you take 5.6 to a power it has nothing to do with 6.6 or 4.6 or any other number. The only thing that changes from call to call is the exponent, because 5.6^n = 5.6*5.6^(n-1).
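Following that advice, a minimal sketch of the recursive method under discussion (the assignment's exact code isn't shown in the thread, so the signature here is assumed from the quoted behavior):

```java
public class Power {
    // d^i = d * d^(i-1), with d^0 = 1.0.
    // d is never incremented or changed; only the exponent i shrinks
    // toward the base case.
    static double power(double d, int i) {
        if (i == 0) {
            return 1.0; // base case: ends the chain of calls
        }
        return d * power(d, i - 1);
    }
}
```

So power(5.6, 2) expands to 5.6 * power(5.6, 1) = 5.6 * 5.6 * power(5.6, 0) = 5.6 * 5.6 * 1.0.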
|
# ? Feb 7, 2012 18:08 |
|
|
# ? May 15, 2024 10:53 |
|
The key thing to remember with a recursive method like that is that the system 'unwinds' itself all the way first and then 'collapses' back on itself to return the result. It seems a bit like you're fixated on the return statement from the first call to the power method, but that return statement is actually the very last statement which executes - after all of the other calls the method makes to itself, depending on the value of i.

The if statement which also seemed to be catching you out is simply there to define the 'collapse' point in the recursion - there has to be a point at which the method stops calling itself, otherwise you'd have an infinite loop. With this method the recursion ends when i reaches zero, i.e. when d has been multiplied i times. The return value at this point is 1.0, which means this final call to the power method won't affect the calculation (because d*1.0 is still d). As Eggnogium mentioned, d is never incremented.

I think another thing which may be confusing you is that you're looking for some kind of 'result' variable which in this case doesn't exist - the result is held in the return value passed by each call of the power method back up to its parent caller.

Recursion can be a headache if you don't intuitively get it (which I didn't at first - I had to teach myself to 'get' it) so I wouldn't worry too much that you're having a hard time. As has been suggested, try writing out the execution path (or stepping through the code) to see if that helps. If you're writing it out, remember to include the return value after each call to the power method, as that's your 'result' variable and the combination of all of the return values is what yields the final value.

edit: Unrelated to your problem, but I imagine your prof/teacher will mention at some point that recursion is generally very inefficient when it comes to memory use. If you write out the execution of that example you'll probably understand why: the whole 'unwind then collapse' structure requires that, at the point where the recursion reaches its outer limit, the data for every call to the method is in memory at once. In contrast, iterative execution is much more memory efficient but can be slower, depending on the problem being tackled.

rolleyes fucked around with this message at 23:51 on Feb 7, 2012 |
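Writing out the execution path, as suggested, can also be automated - this sketch (method shape assumed, as discussed above) prints each call on the way 'down' and each return value on the way back 'up':

```java
public class PowerTrace {
    static double power(double d, int i) {
        System.out.println("call power(" + d + ", " + i + ")");      // unwinding
        if (i == 0) {
            System.out.println("  i == 0 -> return 1.0");            // collapse point
            return 1.0;
        }
        double result = d * power(d, i - 1);
        System.out.println("  return " + result + " (i=" + i + ")"); // collapsing
        return result;
    }

    public static void main(String[] args) {
        power(5.6, 2);
    }
}
```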
# ? Feb 7, 2012 19:08 |
|
Sab669 posted:In my algorithms class we're going over recursion. I understand the concept, but the specific example we did I'm having a tough time reading.
|
# ? Feb 7, 2012 21:27 |
|
rolleyes posted:In contrast iterative execution is much more memory efficient but can be slower, depending on the problem being tackled. In certain cases it can also be confusing as gently caress, rather than a simple recursive call.
|
# ? Feb 7, 2012 22:11 |
|
rolleyes posted:Unrelated to your problem, but I imagine your prof/teacher will mention at some point that recursion is generally very inefficient when it comes to memory use. I hope so. It would also be nice if the professor made them write iterative and recursive versions of the same algorithm, just so the class can easily see why recursion is simpler to write and why it is less efficient. code:

    private static double power(double d, int i) {
        double r = d;
        while (--i > 0)
            r = r * d;
        return r;
    }

Just so I don't get criticized for posting that code: I realize the method I posted is not very efficient; it was intended to look as close to the recursive solution as possible.
|
# ? Feb 7, 2012 23:45 |
|
Mr. Crow posted:In certain cases it can also be confusing as gently caress, rather than a simple recursive call. Just as a f'rinstance, and one of the classic examples of recursion, Fibonacci. First we'll do an iterative version. code:
code:
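The code blocks above didn't survive, so here is a sketch of what the two versions conventionally look like - the names iterFib and recFib are the thread's, the bodies are assumed:

```java
public class Fib {
    // Iterative: walk up from 0 and 1, keeping only the last two values.
    static long iterFib(int n) {
        long a = 0, b = 1;
        for (int k = 0; k < n; k++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    // Recursive: the textbook definition, which recomputes the same
    // subproblems exponentially many times.
    static long recFib(int n) {
        if (n < 2) {
            return n;
        }
        return recFib(n - 1) + recFib(n - 2);
    }
}
```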
Edit: Apparently iterFib's O(n^2). recFib is still the second most inefficient algorithm ever, after bogosort. darthbob88 fucked around with this message at 00:36 on Feb 8, 2012 |
# ? Feb 7, 2012 23:49 |
|
That implementation of recFib is actually O(F_n), where F_n is the nth Fibonacci number.
|
# ? Feb 8, 2012 00:14 |
|
And the implementation of iterFib is actually O(n^2).
|
# ? Feb 8, 2012 00:23 |
|
shrughes posted:And the implementation of iterFib is actually O(n^2). How?
|
# ? Feb 8, 2012 00:28 |
|
yaoi prophet posted:How? Addition is O(n). Or if we're restricting input size to log_phi(INT_MAX) then the program's O(1) anyway.
|
# ? Feb 8, 2012 00:31 |
|
You're using two different values for n simultaneously if we call the iterative fibonacci implementation O(n^2) (namely, in one place the size of n, and in the other its value).
|
# ? Feb 8, 2012 00:38 |
|
shrughes posted:And the implementation of iterFib is actually O(n^2). No definition of n will make this true. You have n addition operations, and (2n + 3) assignment operations for a literal total of 3n + 3 operations for input n. Edit: oh poo poo I forgot the for loop math, better make it like 5n but it's still O(n) baquerd fucked around with this message at 00:48 on Feb 8, 2012 |
# ? Feb 8, 2012 00:41 |
|
If you want to include the cost of addition in your analysis, it's O(log(n)), not O(n). That's unreasonably picky for an intro class, though.
|
# ? Feb 8, 2012 00:44 |
|
ultrafilter posted:If you want to include the cost of addition in your analysis, it's O(log(n)), not O(n). That's unreasonably picky for an intro class, though. What do you mean? Addition is one clock cycle even if you want to be anal.
|
# ? Feb 8, 2012 00:47 |
|
gently caress you for making me do this, but he's right. If you include the cost of addition then each addition is up to O(n) (where n is the input to the function): there are O(n) additions, and the k'th one takes O(k) time since the kth Fibonacci number has k * log_2 phi = O(k) digits. So the total time will be O(n^2).
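Spelled out as a sum (with φ the golden ratio, so F_k ≈ φ^k / √5):

```latex
\log_2 F_k \approx k \log_2 \varphi = O(k),
\qquad
\sum_{k=1}^{n} O(k) = O\!\left(\frac{n(n+1)}{2}\right) = O(n^2).
```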
Opinion Haver fucked around with this message at 00:49 on Feb 8, 2012 |
# ? Feb 8, 2012 00:47 |
|
baquerd posted:No definition of n will make this true. You have n addition operations, and (2n + 3) assignment operations for a literal total of 3n + 3 operations for input n. Integer addition is not a constant time operation except for subsets of integers (commonly, those from -2^31 to 2^31-1, 0 to 2^32-1, -2^63 to 2^63-1, etc). ^^ I see that I'm off in at least one of these Blotto Skorzany fucked around with this message at 00:49 on Feb 8, 2012 |
# ? Feb 8, 2012 00:47 |
|
yaoi prophet posted:gently caress you for making me do this, but he's right. If you include addition then it's O(n) (where n is the input to the function), because there are O(n) additions and the k'th one will take O(k) time since the kth Fibonacci number has k * log_2 phi = O(k) digits. So the total time will be O(n^2). Maybe I'm forgetting my hardware (and this is all junk since this looks like java and these would then be 32 bit integers anyway), but what modern architecture will go through a clock cycle per bit even on above register size integer addition? For what it's worth in an intro class, O(n^2) is not the answer they are looking for.
|
# ? Feb 8, 2012 00:53 |
|
baquerd posted:What do you mean? Addition is one clock cycle even if you want to be anal. That's only true if integers have a fixed number of bits. If you want to do arithmetic with integers of arbitrary size, you need time proportional to the length of their representation. Edit: Again, this is not something that an introductory class cares about.
|
# ? Feb 8, 2012 00:55 |
|
ultrafilter posted:Edit: Again, this is not something that an introductory class cares about. Actually, it is - CLRS and DPV both go through the time complexity of arbitrary width addition and multiplication in early chapters. Recall he said this was for an algorithms class, not CS1 (although how someone got to algorithms without being familiar with recursion already I don't know. Unfamiliar with the mathematics of recurrence relations, sure, not familiar with the concept, uh...).
|
# ? Feb 8, 2012 03:09 |
|
It's not uncommon to have an "intro to data structures and algorithms" course in the spring semester of freshman year, and I can imagine a program that delays teaching recursion until then. That said, to me, not understanding recursion is really a problem with not understanding function calls properly, and I would certainly hope that the first course would cover that.
|
# ? Feb 8, 2012 03:26 |
|
Otto Skorzeny posted:You're using two different values for n simultaneously if we call the iterative fibonacci implementation O(n^2) (namely, once we are using the size of n, the other time the value) No we aren't, those are the same value of n. The length of fib(n) is log_2(phi) * n + O(1). The cost of addition grows linearly through the course of the loop. So the total runtime is O(n^2).
|
# ? Feb 8, 2012 04:06 |
|
Ok I actually have a question re: complexity pertaining to this. If you memoize the results for recFib, does it change the actual complexity of the function, or does it just speed it up without altering the O(f) measurement? I would think it doesn't change the complexity because of worst-case costs and lookups and stuff, but I honestly have no clue.
|
# ? Feb 8, 2012 05:02 |
|
Look Around You posted:Ok I actually have a question re: complexity pertaining to this. If you memoize the results for recFib, does it change the actual complexity of the function or does it just speed it up without actually altering the actual O(f) measurement? It's a pretty significant speedup. If you unwind the recursion for, say, f_20, you end up computing f_18 on one call, and f_19 on the other. But if you dig into that call for f_19, you'll see that you're computing f_18 again. That's where memoization really helps.
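A sketch of what that memoization can look like (array cache; all names assumed, not from the thread). With a cache each f_k is computed only once, so assuming O(1) addition the call count - and hence the complexity - drops from exponential to O(n):

```java
public class MemoFib {
    // Memoized recursion: each fib(k) is computed once and cached, so there
    // are O(n) calls instead of O(phi^n), assuming O(1) addition and lookup.
    static long fib(int n, long[] memo) {
        if (n < 2) {
            return n;
        }
        // 0 safely means "not yet computed": every cached value here is >= 1,
        // because n < 2 returns before touching the cache.
        if (memo[n] != 0) {
            return memo[n]; // cache hit: no re-expansion of this subtree
        }
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
        return memo[n];
    }

    static long fib(int n) {
        return fib(n, new long[n + 2]);
    }
}
```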
|
# ? Feb 8, 2012 05:12 |
|
Am I really in the minority here in assuming that the pseudocode indicated fixed-width integers?
|
# ? Feb 8, 2012 06:35 |
|
ultrafilter posted:It's a pretty significant speedup. If you unwind the recursion for, say, f_20, you end up computing f_18 on one call, and f_19 on the other. But if you dig into that call for f_19, you'll see that you're computing f_18 again. That's where memoization really helps. Oh, yeah I know that it'd speed it up, I'm just wondering whether the algorithm is still O(2^n).
|
# ? Feb 8, 2012 11:40 |
|
GrumpyDoctor posted:Am I really in the minority here in assuming that the pseudocode indicated fixed-width integers? Well, you're at least not alone. I guess the question boils down to "what's the complexity of this in the typical implementation" vs. "in the general case".

edit: As for the memoization, hmm. I can't really be bothered to work it out right now (I'm at work and should really be doing something else), but I have a feeling it will get it down below O(N²). I did some empirical testing, though. Given a simple python implementation of the two variants and some timing, I got absolutely beautiful timing results for the recursive case, closely approximating T(n) = T(n-1) + T(n-2), starting around n=4. code:

N on the X axis, time on the Y axis. The idea was that if the actual function was e.g. O(N + smallConstant * N²), subtracting out the O(N) component would make it easier to see the shape of the rest ... but I'm not sure how much that helped.

edit2: Of course, this is as much a test of the dictionary class in python as anything else. The code I used was this: code:

If we operate with O(1) addition and memoization operations (e.g. fixed-width ints and a preallocated zeroed array of fixed-width cells in storage with real random access), the entire thing seems to be O(N). Given variable-width ints it's O(N*log(N)).

edit4: drat, I was supposed to be doing things with research data at work.

Computer viking fucked around with this message at 16:48 on Feb 8, 2012 |
# ? Feb 8, 2012 13:42 |
|
I remember my big stumbling block to understanding recursion was the big O cost of addition.
|
# ? Feb 8, 2012 16:41 |
|
Is the big O notation really at all necessary for understanding recursion? For the finer points of performance and memory tradeoffs, sure. But I figured out recursion long before I even heard of big O; once I had a good real life application for it, everything just kind of fell into place somehow. I'm not trying to say I'm some genius programmer either - quite the opposite, in fact.
|
# ? Feb 8, 2012 20:22 |
|
mobby_6kl posted:Is the big O notation really at all necessary for understanding recursion? For the finer points of performance and memory tradeoffs, sure. But I figured out recursion long before I even heard of the big O, once I had a good real life application for it, everything just kind of fell into place somehow. I'm not trying to say I'm some genius programmer either, quite the opposite in fact It's not necessary for understanding the concept of recursion, but for being able to compare the performance of recursive and iterative implementations of certain algorithms.
|
# ? Feb 8, 2012 21:30 |
|
In a general sense, what are some good ways to approach getting familiar with a code base that you did not write and the authors are unavailable? I'm inheriting a convoluted project and all the previous authors have left the school and I'm looking for general reading advice.
|
# ? Feb 8, 2012 21:35 |
|
ManlyWeevil posted:In a general sense, what are some good ways to approach getting familiar with a code base that you did not write and the authors are unavailable? I'm inheriting a convoluted project and all the previous authors have left the school and I'm looking for general reading advice. One way is to start by identifying a problem you'd like to solve, or some feature where you'd like to see how it works. Trace the code execution from the closest known point (in a standalone application this may be the entrance point), and step through the code, following tangential method calls minimally. Generate a theory on why the problem exists, if it is not apparent. Now pop open a debugger and test your theory with the program state, fix the problem and/or move on. Repeat that a few dozen times with different areas and you will start to develop an idea of how everything fits together while possibly accomplishing goals at the same time.
|
# ? Feb 8, 2012 21:54 |
|
I would say that trying to understand a specific problem is a good guide (and a good way to accomplish something with your learning), but the important part to learning reasonably quickly is the debugger part. In my personal experience, nothing has been a more effective, time efficient way to get a solid understanding of foreign, complex code than 10-20 hours stepping through code in a debugger (even including 'read documentation' and 'ask the author' when either is available).
|
# ? Feb 8, 2012 22:27 |
|
I didn't know if this was too basic for the game thread or not. If I have a tool that reads text files and converts characters to a MapTiles collection, do I keep those text files as part of my solution or do I store the final MapTiles collection in some manner?
|
# ? Feb 10, 2012 09:39 |
|
gwar3k1 posted:I didn't know if this was too basic for the game thread or not. If I have a tool that reads text files and converts characters to a MapTiles collection, do I keep those text files as part of my solution or do I store the final MapTiles collection in some manner? Whichever is more convenient for you and/or the user? Storing the serialized MapTiles will probably give you shorter loading times, but it means you need to re-build them if you change the text files (and it makes it harder for others to change things - which could be good or bad). Computer viking fucked around with this message at 23:35 on Feb 10, 2012 |
# ? Feb 10, 2012 11:50 |
|
gwar3k1 posted:I didn't know if this was too basic for the game thread or not. If I have a tool that reads text files and converts characters to a MapTiles collection, do I keep those text files as part of my solution or do I store the final MapTiles collection in some manner? Store them unserialized and make running the serialization tool part of your normal build process.
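Whichever storage route is taken, the text-to-tiles conversion itself stays small. A sketch, with a hypothetical MapTile type standing in for the poster's real MapTiles collection:

```java
import java.util.ArrayList;
import java.util.List;

public class MapLoader {
    // Hypothetical tile type; the poster's actual MapTiles shape is unknown.
    record MapTile(int x, int y, char kind) {}

    // Convert lines of characters (e.g. read from a text file) into tiles,
    // one tile per character, positioned by column and row.
    static List<MapTile> parse(List<String> lines) {
        List<MapTile> tiles = new ArrayList<>();
        for (int y = 0; y < lines.size(); y++) {
            String row = lines.get(y);
            for (int x = 0; x < row.length(); x++) {
                tiles.add(new MapTile(x, y, row.charAt(x)));
            }
        }
        return tiles;
    }
}
```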
|
# ? Feb 10, 2012 15:35 |
|
Awesome, I appreciate both your input.
|
# ? Feb 10, 2012 19:12 |
|
Double post for new question. If I don't set a property because the value doesn't meet my requirements (an array of particular length and types), am I supposed to inform the user somehow?
|
# ? Feb 11, 2012 17:29 |
|
I would at least have some way for a user to find out, like a log somewhere. Just don't throw a dialog up.
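A sketch of that approach (all names hypothetical): the setter rejects the bad array, keeps the old value, and records a queryable reason instead of raising a dialog:

```java
public class ValidatedProperty {
    private int[] values;
    private String lastError; // queryable by the caller instead of a dialog

    // Example rule: only accept arrays of exactly 3 elements.
    public boolean setValues(int[] candidate) {
        if (candidate == null || candidate.length != 3) {
            lastError = "expected exactly 3 elements";
            return false; // property left unchanged
        }
        values = candidate.clone();
        lastError = null;
        return true;
    }

    public String getLastError() {
        return lastError;
    }
}
```

Returning a boolean (or throwing a documented exception) at least lets calling code decide how loudly to complain.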
|
# ? Feb 11, 2012 19:18 |
|
I programmed a finite element program not too long ago. I'm trying to make it so it can read Abaqus input files. They look a little something like this:

    *NODE name=nodeList, otheroptions=...
    1 1,2,3
    2 4,5,6
    .
    .
    .
    *NSET name = base, instance = blah
    1 2, 3, 4, 5, 6, 7

Now what these things do isn't too important; I just need help in organizing how to import everything. *NODE is a list of nodes in my program, and *NSET is a set of nodes that will need to be used in the program later. I scan through my text file and have it stop when it finds a * statement. Then I have a switch statement that looks at whether it's *NODE or *NSET or whatever, and I make a NODE object or an NSET object.

What is the best way to parse the string "*NODE name=nodeList, otheroptions=..." and give that object its options? Also, if I have multiple NSET objects, and you don't know how many while you're going through the input file, what is the best way to recall them? Later in the input file it will call for certain nsets by their name= parameter. Would it be best to make a linked list and then check all the nsets' names one by one until I find it? Or is there a better data structure?

Sorry if this is a little vague - it's kind of hard to explain. I'm just not used to programming for input files that are semi-complex.
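On the two concrete questions: the option string can be split on commas and '=' into a map, and named NSETs are easier to recall from a HashMap keyed on the name= value than by scanning a linked list. A sketch (class and method names assumed, not from the poster's program):

```java
import java.util.HashMap;
import java.util.Map;

public class KeywordParser {
    // Parse "*NSET name = base, instance = blah" into an options map.
    static Map<String, String> parseOptions(String line) {
        Map<String, String> opts = new HashMap<>();
        // Drop the leading keyword ("*NODE", "*NSET", ...) before the first space.
        String rest = line.contains(" ") ? line.substring(line.indexOf(' ') + 1) : "";
        for (String part : rest.split(",")) {
            String[] kv = part.split("=", 2);
            if (kv.length == 2) {
                opts.put(kv[0].trim(), kv[1].trim());
            }
        }
        return opts;
    }

    // Recall sets by name in O(1): register each parsed NSET under
    // opts.get("name") here instead of walking a linked list.
    static final Map<String, int[]> nsetsByName = new HashMap<>();
}
```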
|
# ? Feb 12, 2012 01:14 |
|
|
I need a way to read an array from a text file (just a bunch of numbers separated by commas) and then format it - like putting every 3 elements inside brackets and stuff like that. Is there an easy way to do something like that?
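One straightforward way, sketched here with the input already in a string (reading the file into that string is a one-liner with java.nio.file.Files.readString):

```java
import java.util.Arrays;

public class Grouper {
    // "1,2,3,4,5,6" -> "[1, 2, 3] [4, 5, 6]":
    // split on commas (tolerating whitespace), then bracket every 3 elements.
    static String groupByThrees(String csv) {
        String[] nums = csv.split("\\s*,\\s*");
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < nums.length; i += 3) {
            int end = Math.min(i + 3, nums.length);
            out.append(Arrays.toString(Arrays.copyOfRange(nums, i, end)));
            if (end < nums.length) {
                out.append(' ');
            }
        }
        return out.toString();
    }
}
```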
|
# ? Feb 12, 2012 07:08 |