Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

JawnV6 posted:

You're going to be calling the power() function multiple times. It seems like you're only focusing on the final instance it's called. You should write out explicitly what the code does, starting from the function call power(5.6,2).

That will return d * power(d, i-1) = 5.6 * power(5.6,2-1) = 5.6 * power(5.6,1)

And d should never be incremented or changed. When you take 5.6 to a power it has nothing to do with 6.6 or 4.6 or any other number. The only thing that changes from call to call is the exponent, because 5.6^n = 5.6*5.6^(n-1).
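If it helps, here's that expansion as runnable code (a Python port of the Java method, purely for illustration), with a print at each call so you can watch it unwind:

```python
# A Python port of the Java power() method, printing at each call so the
# expansion described above is visible.
def power(d, i, depth=0):
    indent = "  " * depth
    print(f"{indent}power({d}, {i})")
    if i == 0:
        print(f"{indent}-> 1.0")
        return 1.0
    result = d * power(d, i - 1, depth + 1)
    print(f"{indent}-> {result}")
    return result

print(power(5.6, 2))  # 5.6 * (5.6 * 1.0), about 31.36
```

Note that d is the same in every line of the printout; only i changes.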


rolleyes
Nov 16, 2006

Sometimes you have to roll the hard... two?
The key thing to remember with a recursive method like that is that the system 'unwinds' itself all the way first and then 'collapses' back on itself to return the result. It seems a bit like you're fixated on the return statement from the first call to the power method, but that return statement is actually the very last statement which executes - after all of the other calls the method makes to itself depending on the value of i.

The if statement which also seemed to be catching you out a bit is simply there to define the 'collapse' point in the recursion - there has to be a point at which the method stops calling itself, otherwise you'd have an infinite loop. With this method the recursion ends when i reaches zero, i.e. once d has been multiplied the original i times. The return value at that point is 1.0 so that the final call to the power method doesn't affect the calculation (because d*1.0 is still d).

As Eggnogium mentioned, d is never incremented. I think another thing which may be confusing you is that you're trying to find some kind of 'result' variable which in this case doesn't exist - the result is held in the return value passed by each call of the power method back up to its parent caller.


Recursion can be a headache if you don't intuitively get it (which I don't, I had to teach myself to 'get' it) so I wouldn't worry too much that you're having a hard time. As has been suggested, try writing out the execution path (or stepping through the code) to see if that helps you. If you're writing it out, remember to include the return value after each call to the power method, as that's your 'result' variable and the combination of all of the return values is what yields the final value.



edit:
Unrelated to your problem, but I imagine your prof/teacher will mention at some point that recursion is generally very inefficient when it comes to memory use. If you write out the execution of that example you'll probably understand why: the whole 'unwind then collapse' structure requires that, at the point where the recursion reaches its outer limit, the data for every call to the method is in memory at once. In contrast iterative execution is much more memory efficient but can be slower, depending on the problem being tackled.
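You can actually watch this happen. A quick Python sketch (names made up) that tracks how many stack frames are live at once:

```python
max_depth = 0

def power_traced(d, i, depth=1):
    """Recursive power(), tracking how many stack frames are live at once."""
    global max_depth
    max_depth = max(max_depth, depth)
    if i == 0:
        return 1.0
    return d * power_traced(d, i - 1, depth + 1)

result = power_traced(2.0, 50)
# 50 multiplications, but 51 frames live at the deepest point; an iterative
# loop would do the same 50 multiplications in a single frame.
print(result, max_depth)
```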

rolleyes fucked around with this message at 23:51 on Feb 7, 2012

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

Sab669 posted:

In my algorithms class we're going over recursion. I understand the concept, but the specific example we did I'm having a tough time reading.

[snip]
That's what he showed us, and it really confuses me. Why isn't y 1.0 when we print it?
If you think it should be 1.0 then you don't understand the concept.

Mr. Crow
May 22, 2008

Snap City mayor for life

rolleyes posted:

In contrast iterative execution is much more memory efficient but can be slower, depending on the problem being tackled.

In certain cases it can also be confusing as gently caress, rather than a simple recursive call.

oRenj9
Aug 3, 2004

Who loves oRenj soda?!?
College Slice

rolleyes posted:

Unrelated to your problem, but I imagine your prof/teacher will mention at some point that recursion is generally very inefficient when it comes to memory use.

I hope so. It would also be nice if the professor made them write iterative and recursive versions of the same algorithm, just so the class can easily see why the recursive version is simpler to write and why it is less efficient.

code:
private static double power(double d, int i) {
	double r = 1.0;	// start at 1.0 so power(d, 0) returns 1.0, like the recursive version
	while( true ) {
		if ( i == 0 )  {
			r = r * 1.0;
			break;
		}
		r = r * d;
		--i;
	}
	return r;
}

private static double power(double d, int i) {
	if(i == 0)
		return 1.0;
	
	return d*power(d, i-1);
}
Sab669, does this help you out? It's the recursive function written iteratively. Notice how the operations are practically the same: multiply a running result by d, decrement i, and when i reaches 0, multiply the result by 1.0 and return it.

Just so I don't get criticized for posting that code: I realize the method above is not very efficient; it was intended to look as close to the recursive solution as possible.
private static double power(double d, int i) {
	double r = 1.0;
	while( i-- > 0 ) r = r * d;
	return r;
}
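And a quick sanity check (Python ports of both, since those are quick to run) that the iterative and recursive versions agree, including at i = 0:

```python
# Python ports of the recursive and iterative power() versions. The iterative
# r starts at 1.0 so that power(d, 0) == 1.0, matching the recursive base case.
def power_rec(d, i):
    if i == 0:
        return 1.0
    return d * power_rec(d, i - 1)

def power_iter(d, i):
    r = 1.0
    while i > 0:
        r = r * d
        i -= 1
    return r

for i in range(8):
    assert power_rec(3.0, i) == power_iter(3.0, i)
print(power_iter(2.0, 10))  # 1024.0
```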

darthbob88
Oct 13, 2011

YOSPOS

Mr. Crow posted:

In certain cases it can also be confusing as gently caress, rather than a simple recursive call.

Just as a f'rinstance, and one of the classic examples of recursion, Fibonacci. First we'll do an iterative version.
code:
public static int iterFib(int n) {
        int fN = 1, fNLessOne = 1, fNLessTwo = 0;
        for (int i = 0; i < n; i++) {
                fN = fNLessOne + fNLessTwo;
                fNLessTwo = fNLessOne;
                fNLessOne = fN;
        }
        return fN;
}
Moderately confusing and I know I made an error somewhere. On the other hand, when you do it recursively-
code:
public static int recFib(int n) {
        if(n == 1 || n==0) {
                return 1;
        } else {
                return recFib(n-1) + recFib(n-2);
        }
}
Much simpler and immediately understandable. On the other hand, recFib is only slightly more efficient than bubbleSort, while iterFib can be done in constant space and quadratic time.

Edit: Apparently iterFib's O(n^2). recFib is still the second most inefficient algorithm ever, after bogosort.

darthbob88 fucked around with this message at 00:36 on Feb 8, 2012

Opinion Haver
Apr 9, 2007

That implementation of recFib is actually O(F_n), where F_n is the nth Fibonacci number.

shrughes
Oct 11, 2008

(call/cc call/cc)
And the implementation of iterFib is actually O(n^2).

Opinion Haver
Apr 9, 2007

shrughes posted:

And the implementation of iterFib is actually O(n^2).

How?

shrughes
Oct 11, 2008

(call/cc call/cc)

yaoi prophet posted:

How?

Addition is O(n).

Or if we're restricting input size to log_phi(INT_MAX) then the program's O(1) anyway.

Blotto Skorzany
Nov 7, 2008

He's a PSoC, loose and runnin'
came the whisper from each lip
And he's here to do some business with
the bad ADC on his chip
bad ADC on his chiiiiip
You're using two different values for n simultaneously if we call the iterative Fibonacci implementation O(n^2) (namely, once the size of n and the other time its value)

baquerd
Jul 2, 2007

by FactsAreUseless

shrughes posted:

And the implementation of iterFib is actually O(n^2).

No definition of n will make this true. You have n addition operations, and (2n + 3) assignment operations for a literal total of 3n + 3 operations for input n.

Edit: oh poo poo I forgot the for loop math, better make it like 5n but it's still O(n)

baquerd fucked around with this message at 00:48 on Feb 8, 2012

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


If you want to include the cost of addition in your analysis, it's O(log(n)), not O(n). That's unreasonably picky for an intro class, though.

baquerd
Jul 2, 2007

by FactsAreUseless

ultrafilter posted:

If you want to include the cost of addition in your analysis, it's O(log(n)), not O(n). That's unreasonably picky for an intro class, though.

What do you mean? Addition is one clock cycle even if you want to be anal.

Opinion Haver
Apr 9, 2007

gently caress you for making me do this, but he's right. If you include the cost of addition, then each addition is O(n) (where n is the input to the function): there are O(n) additions and the k'th one takes O(k) time, since the kth Fibonacci number has k * log_2(phi) = O(k) bits. So the total time is O(n^2).
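Python ints are arbitrary precision, so the digit growth is directly measurable (quick sketch; fib here uses the F_1 = F_2 = 1 convention):

```python
import math

def fib(n):  # F_1 = F_2 = 1 convention
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
for k in (50, 100, 200):
    # bit_length tracks k * log2(phi), i.e. roughly 0.694 bits per step
    print(k, fib(k).bit_length(), k * math.log2(phi))
```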

Opinion Haver fucked around with this message at 00:49 on Feb 8, 2012

Blotto Skorzany
Nov 7, 2008

He's a PSoC, loose and runnin'
came the whisper from each lip
And he's here to do some business with
the bad ADC on his chip
bad ADC on his chiiiiip

baquerd posted:

No definition of n will make this true. You have n addition operations, and (2n + 3) assignment operations for a literal total of 3n + 3 operations for input n.

Integer addition is not a constant time operation except for subsets of integers (commonly, those from -2^31 to 2^31-1, 0 to 2^32-1, -2^63 to 2^63-1, etc).

If we consider n the size of the input in bits, then outside these defined ranges we require on the order of 2^n operations of order-n complexity and thus have an exponential-time algorithm; or, if we consider n the value of the input, we have order-n operations of order log_2(n) complexity for an n log n overall complexity. However, since this n isn't really the size of the input, this is only a pseudo-polynomial time complexity.

^^ I see that I'm off in at least one of these

Blotto Skorzany fucked around with this message at 00:49 on Feb 8, 2012

baquerd
Jul 2, 2007

by FactsAreUseless

yaoi prophet posted:

gently caress you for making me do this, but he's right. If you include the cost of addition, then each addition is O(n) (where n is the input to the function): there are O(n) additions and the k'th one takes O(k) time, since the kth Fibonacci number has k * log_2(phi) = O(k) bits. So the total time is O(n^2).

Maybe I'm forgetting my hardware (and this is all junk since this looks like java and these would then be 32 bit integers anyway), but what modern architecture will go through a clock cycle per bit even on above register size integer addition?

For what it's worth in an intro class, O(n^2) is not the answer they are looking for.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


baquerd posted:

What do you mean? Addition is one clock cycle even if you want to be anal.

That's only true if integers have a fixed number of bits. If you want to do arithmetic with integers of arbitrary size, you need time proportional to the length of their representation.

Edit: Again, this is not something that an introductory class cares about.

Blotto Skorzany
Nov 7, 2008

He's a PSoC, loose and runnin'
came the whisper from each lip
And he's here to do some business with
the bad ADC on his chip
bad ADC on his chiiiiip

ultrafilter posted:

Edit: Again, this is not something that an introductory class cares about.

Actually, it is - CLRS and DPV both go through the time complexity of arbitrary width addition and multiplication in early chapters. Recall he said this was for an algorithms class, not CS1 (although how someone got to algorithms without being familiar with recursion already I don't know. Unfamiliar with the mathematics of recurrence relations, sure, not familiar with the concept, uh...).

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
It's not uncommon to have an "intro to data structures and algorithms" course in the spring semester of freshman year. I can imagine a program that delays teaching recursion until then. That said, to me, not understanding recursion is really a problem with not understanding function calls properly, and I would certainly hope that the first course would cover that.

shrughes
Oct 11, 2008

(call/cc call/cc)

Otto Skorzeny posted:

You're using two different values for n simultaneously if we call the iterative Fibonacci implementation O(n^2) (namely, once the size of n and the other time its value)

No we aren't, those are the same value of n. The length of fib(n) is log_2(phi) * n + O(1). The cost of addition grows linearly through the course of the loop. So the total runtime is O(n^2).

Look Around You
Jan 19, 2009

Ok I actually have a question re: complexity pertaining to this. If you memoize the results for recFib, does it change the actual complexity of the function or does it just speed it up without actually altering the actual O(f) measurement?

I would think it doesn't actually change the complexity because of worst case costs and lookups and stuff but I honestly have no clue.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Look Around You posted:

Ok I actually have a question re: complexity pertaining to this. If you memoize the results for recFib, does it change the actual complexity of the function or does it just speed it up without actually altering the actual O(f) measurement?

I would think it doesn't actually change the complexity because of worst case costs and lookups and stuff but I honestly have no clue.

It's a pretty significant speedup. If you unwind the recursion for, say, f_20, you end up computing f_18 on one call, and f_19 on the other. But if you dig into that call for f_19, you'll see that you're computing f_18 again. That's where memoization really helps.
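A sketch of that in Python (using functools.lru_cache as the memo table; the `computed` list is just instrumentation to show each value gets computed exactly once):

```python
from functools import lru_cache

computed = []

@lru_cache(maxsize=None)
def memo_fib(n):
    computed.append(n)  # the body only runs on a cache miss
    if n == 0 or n == 1:
        return 1
    return memo_fib(n - 1) + memo_fib(n - 2)

print(memo_fib(20))   # 10946
print(len(computed))  # 21: each of f_0 .. f_20 computed exactly once
```

Without the cache, the same call would hit the function thousands of times.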

raminasi
Jan 25, 2005

a last drink with no ice
Am I really in the minority here in assuming that the pseudocode indicated fixed-width integers?

Look Around You
Jan 19, 2009

ultrafilter posted:

It's a pretty significant speedup. If you unwind the recursion for, say, f_20, you end up computing f_18 on one call, and f_19 on the other. But if you dig into that call for f_19, you'll see that you're computing f_18 again. That's where memoization really helps.

Oh yeah, I know it'd speed it up; I'm just wondering whether the algorithm is still O(2^n)

Computer viking
May 30, 2011
Now with less breakage.

GrumpyDoctor posted:

Am I really in the minority here in assuming that the pseudocode indicated fixed-width integers?

Well, you're at least not alone. I guess the question boils down to "what's the complexity of this in the typical implementation" vs "in the general case".

edit: As for the memoization, hmm. I can't really be bothered to work it out right now (I'm at work and should really be doing something else), but I have a feeling it will get it down below O(N²). I did some empirical testing, though. Given a simple python implementation of the two variants and some timing, I got absolutely beautiful timing results for the recursive case, closely approximating T(n) = T(n-1) + T(n-2), starting around n=4.
code:
0: 0.02
1: 0.02
2: 0.06
3: 0.10
4: 0.18
5: 0.29
6: 0.51
7: 0.80
8: 1.31
9: 2.21
10: 3.50
11: 5.67
12: 9.19
13: 14.68
The memoized variant was harder to test, since I ran into the recursion limit around n=1900. It's also got a ... less obvious shape. Plotted, it looks like this: the black line is the time, the red line is just a straight line from T(0) to T(max(n)), the dots are the difference between them (arbitrarily rescaled; the absolute values aren't important), and the blue line is the 0-axis for the dots.
[plot omitted]
N on the X axis, time on the Y axis. The idea was that if the actual function was e.g. O(N + smallConstant * N²), subtracting out the O(N) component would make it easier to see the shape of the rest ... but I'm not sure how much that helped.

edit2: Of course, this is as much a test of the dictionary class in python as anything else. The code I used was this:
code:
import timeit

def fibRec(i):
	if (i==0 or i==1):
		return 1
	else:
		return fibRec(i-1) + fibRec(i-2)

def fibRecMemo(i, cache):
	if i in cache:
		return cache[i]
	if i==0 or i==1:
		return 1
	res = fibRecMemo(i-1, cache) + fibRecMemo(i-2,cache)
	cache[i] = res
	return res


def runFibRec(i):
	cache={}
	return fibRecMemo(i,cache)

rRes={}
mRes={}

for i in range(100):
	rTime = timeit.Timer("fibRec(%i)" % (i), setup = "from __main__ import fibRec")
	mTime = timeit.Timer("runFibRec(%i)" % (i), setup = "from __main__ import runFibRec")
	rRes[i] = rTime.timeit(number=100000)
	mRes[i] = mTime.timeit(number=100000)
	print("%i: %.2f / %.2f" % (i,rRes[i], mRes[i]) )
edit3: Having thought about it for a bit, etc. Assume that the recursive calls are done with the n-1 first. The first chain of recursion to hit the "bottom" will have cached all the relevant values, so all the (n-2)-calls will just return a cached value. This effectively reduces the problem to O(C*n), where C is the cost of two function calls, the addition, a cache insertion, and a cache lookup. If you draw out the call tree for the non-memoized case, the memoization prunes everything except the leftmost depth-first branch and one right-hand call for each level.

If we operate with O(1) addition and memoization operations (e.g. fixed-width ints and a preallocated zeroed array of fixed-width cells in storage with real random access) the entire thing seems to be O(N). Given variable-width ints the additions grow to O(N) bits each, so it's more like O(N²) overall.

edit4: drat, I was supposed to be doing things with research data at work.

Computer viking fucked around with this message at 16:48 on Feb 8, 2012

JawnV6
Jul 4, 2004

So hot ...
I remember my big stumbling block to understanding recursion was the big O cost of addition.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Is the big O notation really at all necessary for understanding recursion? For the finer points of performance and memory tradeoffs, sure. But I figured out recursion long before I even heard of the big O, once I had a good real life application for it, everything just kind of fell into place somehow. I'm not trying to say I'm some genius programmer either, quite the opposite in fact :downs:

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


mobby_6kl posted:

Is the big O notation really at all necessary for understanding recursion? For the finer points of performance and memory tradeoffs, sure. But I figured out recursion long before I even heard of the big O, once I had a good real life application for it, everything just kind of fell into place somehow. I'm not trying to say I'm some genius programmer either, quite the opposite in fact :downs:

It's not necessary for understanding the concept of recursion, but it is for comparing the performance of recursive and iterative implementations of certain algorithms.

Drape Culture
Feb 9, 2010

But it was all right, everything was all right, the struggle was finished. He had won the victory over himself. He loved Big Brother.

The End.
In a general sense, what are some good ways to get familiar with a code base that you did not write, when the authors are unavailable? I'm inheriting a convoluted project, all the previous authors have left the school, and I'm looking for general reading advice.

baquerd
Jul 2, 2007

by FactsAreUseless

ManlyWeevil posted:

In a general sense, what are some good ways to get familiar with a code base that you did not write, when the authors are unavailable? I'm inheriting a convoluted project, all the previous authors have left the school, and I'm looking for general reading advice.

One way is to start by identifying a problem you'd like to solve, or some feature where you'd like to see how it works. Trace the code execution from the closest known point (in a standalone application this may be the entrance point), and step through the code, following tangential method calls minimally. Generate a theory on why the problem exists, if it is not apparent. Now pop open a debugger and test your theory with the program state, fix the problem and/or move on.

Repeat that a few dozen times with different areas and you will start to develop an idea of how everything fits together while possibly accomplishing goals at the same time.

Zhentar
Sep 28, 2003

Brilliant Master Genius
I would say that trying to understand a specific problem is a good guide (and a good way to accomplish something with your learning), but the important part to learning reasonably quickly is the debugger part. In my personal experience, nothing has been a more effective, time efficient way to get a solid understanding of foreign, complex code than 10-20 hours stepping through code in a debugger (even including 'read documentation' and 'ask the author' when either is available).

gwar3k1
Jan 10, 2005

Someday soon
I didn't know if this was too basic for the game thread or not. If I have a tool that reads text files and converts characters to a MapTiles collection, do I keep those text files as part of my solution or do I store the final MapTiles collection in some manner?

Computer viking
May 30, 2011
Now with less breakage.

gwar3k1 posted:

I didn't know if this was too basic for the game thread or not. If I have a tool that reads text files and converts characters to a MapTiles collection, do I keep those text files as part of my solution or do I store the final MapTiles collection in some manner?

Whichever is more convenient for you and/or the user? Storing the serialized MapTiles will probably give you shorter loading times, but it means you need to re-build them if you change the text files (and it makes it harder for others to change things - which could be good or bad).

Computer viking fucked around with this message at 23:35 on Feb 10, 2012

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



gwar3k1 posted:

I didn't know if this was too basic for the game thread or not. If I have a tool that reads text files and converts characters to a MapTiles collection, do I keep those text files as part of my solution or do I store the final MapTiles collection in some manner?

Store them unserialized and make running the serialization tool part of your normal build process.
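Something like this, roughly (a Python sketch; MapTiles and the tile characters are made up since we haven't seen the real class, and JSON is just one convenient serialized form):

```python
import json

def parse_map_text(text):
    # '#' wall, '.' floor -- hypothetical tile characters; "MapTiles" here is
    # just a list of rows of tile codes standing in for the real class
    return [list(line) for line in text.strip().splitlines()]

def build_step(text):
    # the tool, run at build time: parse the human-editable text once,
    # emit a serialized form
    return json.dumps(parse_map_text(text))

def load_tiles(serialized):
    # what the game does at load time: no text parsing, just deserialize
    return json.loads(serialized)

raw = "###\n#.#\n###"
serialized = build_step(raw)
print(load_tiles(serialized) == parse_map_text(raw))  # True
```

That way you keep the editable text files in the project and still get the fast-loading serialized form for free on every build.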

gwar3k1
Jan 10, 2005

Someday soon
Awesome, I appreciate both your input.

gwar3k1
Jan 10, 2005

Someday soon
Double post for new question.

If I don't set a property because the value doesn't meet my requirements (an array of particular length and types), am I supposed to inform the user somehow?

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.
I would at least have some way for a user to find out, like a log somewhere. Just don't throw a dialog up.

Johann Gambolputty
Jul 6, 2006
I wrote a finite element program not too long ago, and I'm trying to make it read Abaqus input files.

They look a little something like this

*NODE name=nodeList, otheroptions=...
1 1,2,3
2 4,5,6
.
.
.

*NSET name = base, instance = blah
1 2, 3, 4, 5, 6, 7

Now what these things do isn't too important. I just need help in organizing how to import everything. But, *NODE is a list of nodes in my program, and *NSET is a set of nodes that will need to be used in the program later.


I scan through my text file and have it stop when it finds a * statement. Then I have a switch statement that looks at whether it's *NODE or *NSET or whatever, and I make a NODE object or an NSET object. What is the best way to parse the string "*NODE name=nodeList, otheroptions=..." so I can give that object its options?

Also, if I have multiple NSET objects, and you don't know how many while you're going through the input file, what is the best way to recall them? Later in the input file it will call for certain nsets by their name= parameter. Would it be best to make a linked list and then check all the nsets' names one by one until I find it? Or is there a better data structure?

Sorry if this is a little vague. It's kind of hard to explain. I'm just not used to programming for input files that are semi-complex.
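For the options, one common approach is to split the keyword line on commas and then each piece on '='; for recalling the NSETs, a dict (hash map) keyed by name beats a linked list, since later references become O(1) lookups instead of a scan. A rough Python sketch (not Abaqus-complete; the node ids are placeholders):

```python
def parse_keyword_line(line):
    """Split e.g. '*NSET name = base, instance = blah' into a keyword
    and an options dict."""
    body = line.lstrip("*").strip()
    keyword, _, rest = body.partition(" ")
    options = {}
    for opt in rest.split(","):
        if "=" in opt:
            key, _, value = opt.partition("=")
            options[key.strip()] = value.strip()
    return keyword.strip(), options

nsets = {}  # name -> list of node ids, for O(1) lookup by name later

keyword, opts = parse_keyword_line("*NSET name = base, instance = blah")
if keyword == "NSET":
    nsets[opts["name"]] = [1, 2, 3, 4, 5, 6, 7]  # ids parsed from the data lines

print(keyword, opts)  # NSET {'name': 'base', 'instance': 'blah'}
print(nsets["base"])
```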


cisneros
Apr 18, 2006
I need a way to read an array from a text file (just a bunch of numbers separated by commas) and then format it, like putting every 3 elements inside brackets and stuff like that. Is there an easy way to do something like that?
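One easy way (a Python sketch; the exact bracket format is a guess at what you mean):

```python
def format_in_threes(text):
    # split on commas, drop empty pieces, then group every 3 with a slice
    nums = [n.strip() for n in text.split(",") if n.strip()]
    groups = [nums[i:i + 3] for i in range(0, len(nums), 3)]
    return "".join("[" + ", ".join(g) + "]" for g in groups)

# with open("numbers.txt") as f: text = f.read()
text = "1, 2, 3, 4, 5, 6, 7"
print(format_in_threes(text))  # [1, 2, 3][4, 5, 6][7]
```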
