|
MSDN on Math.Round(Double) posted:If the fractional component of a is halfway between two integers, one of which is even and the other odd, then the even number is returned. Note that the method returns a Double type rather than an integral type.
|
# ? Jan 28, 2009 19:31 |
|
Vanadium posted:^^^ Apparently there is an extra parameter to let you control that behaviour? http://msdn.microsoft.com/en-us/library/ef48waz8.aspx I tried using MidpointRounding.AwayFromZero, but it gives incorrect results as well, just in different places. I don't know what the gently caress is wrong with Math.Round, but it seems to work properly without the AwayFromZero argument UNLESS it tries to round a value with .5 at the end.
|
# ? Jan 28, 2009 19:34 |
|
I would probably just do something silly like (int) (val * 100 + 0.5) / 100.0, but then again, no one is paying me to write decent C#.
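That add-0.5-and-truncate trick is easy to sanity-check. A quick sketch in Python (the function name is mine, and the thread's code is C#, but the double arithmetic is identical): it does force 4.5 up to 5, but truncation goes the wrong way for negative inputs, and it can't rescue values whose stored double is already below the midpoint.

```python
def truncate_round(val, places=0):
    """The (int)(val * scale + 0.5) / scale trick, translated from the C# one-liner."""
    scale = 10 ** places
    return int(val * scale + 0.5) / scale

# Forces the half up, unlike banker's rounding:
assert truncate_round(4.5) == 5
# ...but truncation pulls negative halves toward zero:
assert truncate_round(-4.5) == -4  # away-from-zero would give -5
# ...and it can't fix values stored just below the midpoint:
assert truncate_round(9076.095, 2) == 9076.09
```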
|
# ? Jan 28, 2009 19:37 |
|
dealmaster posted:I don't know what the gently caress is wrong with Math.Round, but it seems to work properly without the AwayFromZero argument UNLESS it tries to round a value with .5 at the end. You could try reading the MSDN page about it? It pretty clearly tells you what's going on.
|
# ? Jan 28, 2009 20:36 |
|
dealmaster posted:I'm working in C# and am having a bitch of a time with rounding numbers properly. Apparently Math.Round(4.5) returns 4, which is loving mindboggling and I can't understand why it would do such a thing. Does anyone know of a quick, easy way to round the way the rest of the world rounds so that calling Math.Round(4.5) returns 5? Essentially it's rounding down with a decimal of 0.5 instead of rounding up. I have never seen this behavior before. Are you working with single-precision floats? Because if so, 4151.385 becomes 4151.3848 after converting it to the IEEE-754 representation. Rounding that down would properly become 4151.38, since 4151.3848 < 4151.385.
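That single-precision claim is easy to verify by round-tripping the literal through a 4-byte float. A sketch in Python rather than C# (Python floats are C doubles, so the struct round-trip simulates the float cast):

```python
import struct

def as_float32(x):
    """Round-trip a double through IEEE-754 single precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# The nearest 24-bit-mantissa value to 4151.385 is just below it:
assert as_float32(4151.385) == 4151.384765625
assert as_float32(4151.385) < 4151.385
```

4151.384765625 displays as 4151.3848 at float precision, matching the figure above.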
|
# ? Jan 28, 2009 20:48 |
|
Ugg boots posted:Are you working with single-precision floats? Because if so, 4151.385 becomes 4151.3848 after converting it to the IEEE-754 representation. Rounding that down would properly become 4151.38, since 4151.3848 < 4151.385. No, they're double-precision floats.
|
# ? Jan 28, 2009 21:20 |
|
I'm still working on this and am trying to get Math.Round to cooperate using the MidpointRounding arguments. Anyways, I'm trying to figure out why this is happening: double x = 9076.095; x = Math.Round(x, 2, MidpointRounding.AwayFromZero); This should store 9076.1 in x, but it doesn't; it ends up putting 9076.09 there instead. What is going on here, do you guys have any advice? I'm really at a loss here.
|
# ? Jan 28, 2009 23:19 |
|
dealmaster posted:I'm still working on this and am trying to get Math.Round to cooperate using the MidpointRounding arguments. Anyways, I'm trying to figure out why this is happening: Rounding to positions other than the normal decimal point is normally done by multiplying by a power of 10, rounding at the decimal point, then dividing. 9076.095 * 100 is 907609.49999999 with doubles, which rounds to 907609. If you can get away with fixed-point, it's a much better way to handle situations like this. ShoulderDaemon fucked around with this message at 23:25 on Jan 28, 2009 |
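The same effect shows up in any IEEE-754 language; a quick Python check of the figures in this post (Python floats are doubles):

```python
# 9076.095 has no exact binary representation; the stored double is slightly
# below the true value, so scaling by 100 never produces an exact half:
product = 9076.095 * 100
assert product < 907609.5
# Correct decimal rounding of the stored value therefore gives .09, not .10:
assert round(9076.095, 2) == 9076.09
```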
# ? Jan 28, 2009 23:23 |
|
ShoulderDaemon posted:Rounding to positions other than the normal decimal point is normally done by multiplying by a power of 10. Yeah, that's true. I tried this and got the right result, so even though it is hideous, I'll probably end up using it because it works: double x = 9076.095; x *= 10; x = Math.Round(x, 1, MidpointRounding.AwayFromZero); x /= 10; I guess I'll have to do this, but I've got about 30 or 40 rounding calls in my code (thanks to our actuaries), so this could end up getting messy.
|
# ? Jan 28, 2009 23:30 |
|
dealmaster posted:Yeah, that's true. I tried this and got the right result, so even though is hideous, I'll probably end up using it because it works: Keep in mind that your technique may or may not work, depending on what numbers are passing through it. If you always need to be accurate to two digits, you should really just be storing everything always multiplied by 100 and always do your rounding at the decimal point. Floating point math is designed for floating point situations; integer math is more appropriate for fixed point situations.
|
# ? Jan 28, 2009 23:34 |
|
ShoulderDaemon posted:Keep in mind that your technique may or may not work, depending on what numbers are passing through it. If you always need to be accurate to two digits, you should really just be storing everything always multiplied by 100 and always do your rounding at the decimal point. Floating point math is designed for floating point situations; integer math is more appropriate for fixed point situations. I would have stored everything multiplied by 100, but I wasn't told about the accuracy to 2 decimal places until after I had already written the software. In my original code, I just kept all the decimal places and rounded at the very end, which is what made sense to me, but apparently my software has to match this spreadsheet tester that someone wrote, which rounds at every single step. Hence the situation I'm in now: I have to throw in rounding calls all over the goddamn place.
|
# ? Jan 28, 2009 23:40 |
|
Perhaps you should be using decimal instead of double?
|
# ? Jan 28, 2009 23:56 |
|
Avenging Dentist posted:Perhaps you should be using decimal instead of double? gently caress me, I asked myself "what is a decimal data type" when I saw this post and then I noticed this little explanation on the MSDN: "Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations." God dammit.
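A sketch of the same idea in Python's decimal module, which behaves much like C#'s decimal for this purpose (the exact rounding-mode names differ between the two languages):

```python
from decimal import Decimal, ROUND_HALF_UP

# Constructing from a string keeps the value exactly 9076.095; constructing
# from the float literal would smuggle the binary error back in:
x = Decimal("9076.095")
assert x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) == Decimal("9076.10")
assert Decimal(9076.095) != Decimal("9076.095")  # the stored double was never 9076.095
```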
|
# ? Jan 29, 2009 00:05 |
|
So I am doing an astrophysics simulation, and I need to resize some images. What I need to do is take a bunch of images that are of various sizes (up to the largest 2816x2816) and reduce them all to the same resolution as the smallest, 750x750. All of these images are arrays of floating-point flux values. So basically, I need to change the resolution in such a way that preserves the flux density; i.e. if I had the array [[1,1,1,1],[1,1,1,1],[2,2,2,2],[2,2,2,2]] and resized it to 2x2 it would be [[4,4],[8,8]]. In the language I am using, IDL (a math-based language that is made to be good at handling large arrays such as these images), there is a built-in function that can resize images using this method, but only if the final image resolution (in both directions) is an integer multiple of the original, or vice versa. The only way to use it, then, is to go through an intermediary image of a size that is a multiple of both resolutions. If you are following closely, you will have realized that this means having an intermediary array of 750^2*2816^2*(8 bytes [floating point size in IDL]) = 32 terabytes, which NEEDLESS TO SAY does not fit in my server's RAM. I then thought, well, no need to do both dimensions at once; but doing the first dimension would require that same 32 terabytes /750, or 45 GB. Closer, but still not doable. So instead I tried programming my own code, which does a brute-force method that is similar to the method the other function would use, except I calculate each pixel of the intermediary image one at a time and add it to the new image, so that it does not use more than (750^2+2816^2+1)*(8 bytes) of RAM storing the image data at one time. Unfortunately, that means at least two nested loops of lengths up to 750*2816, and this takes an abominably long time to run (by my calculations based on some trial runs, about 70 processor-days for the largest image; IDL is not so good with loops). 
Now, you may have noticed that 750*2816 is not the smallest number that is divisible by both 750 and 2816; so my first question is, how does one go about calculating a least common multiple quickly and efficiently (when I began coding I had hoped that IDL would have a built-in function for it, but no such luck)? edit: I found a way to calculate it, but it would still require about 22 GB for a 1056000x2816 floating-point array; apparently the only common factor of 2816 and 750 is a measly 2. And even if I had that much RAM, IDL can't handle arrays of that size. My second question is, is there some other way to do this that isn't so brute-force? Hopefully someone with more CS experience than me knows of a cleverer way to do this. Thanks in advance. edit: By combining my code with the LCM code and doing 1 dimension at a time, by my calculations I can reduce the runtime to a few minutes per map. It's running right now, so we'll see. edit again: nope. too slow. edit again: \/\/ I am not entirely sure what you are doing, but I'll try it and see. DontMockMySmock fucked around with this message at 18:17 on Jan 30, 2009 |
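For the least-common-multiple side question: Euclid's GCD gives it in a couple of lines, and confirms the figures in this post. A Python sketch:

```python
from math import gcd

def lcm(a, b):
    """Least common multiple via Euclid's GCD."""
    return a // gcd(a, b) * b

assert gcd(2816, 750) == 2          # the only common factor really is 2
assert lcm(2816, 750) == 1_056_000  # hence the 1056000-wide intermediary array
```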
# ? Jan 29, 2009 22:29 |
|
DontMockMySmock posted:astrophysics simulation Write a function that takes the following input: an array of squares n wide and n tall (and their corresponding fluxes), and a query square whose bottom-left corner is (px,py) and whose top-right corner is (px+h, py+h). Let its output be the flux that passes through that square. Maybe represent px, py, and h as rational numbers (pairs of integers) or represent them more cleverly. First, write that function. Then call it for each (px,py) pair that corresponds to a pixel on the resized image. h should (obviously?) be n/750 if you're resizing to 750 by 750 pixels. Your function will have to cut up squares and take part of their flux. It could be made more efficient if it didn't have to recompute its calculations for every output pixel and only had to calculate vertical splits once each row. Edit: I believe this is the algorithm you're looking for (here written in uncompiled, untested C#): code:
shrughes fucked around with this message at 03:29 on Jan 30, 2009 |
# ? Jan 30, 2009 02:42 |
|
Well, I figured it out eventually. Shrughes, although I couldn't understand your code, your description of it got me thinking and I came up with this method: Imagine the old map as a buncha squares and a temporary map of a different resolution in one direction (1 dimension at a time simplifies things) as a buncha rectangles overlaid on the old map. For each column of the old map, do this loop: first, check if the column is entirely contained in a single column of the temporary map. If so, add its flux, row by row, to that column of the temp map. If not, calculate which two columns it is in, and what percentage it is in each. Loop through all the rows and add the appropriate percentage of the old map's value to the appropriate column of the temp map. When it's all done, repeat on rows instead of columns, but instead of going from old map -> temp map, do temp map -> new map. For the curious, this is the code I am using; it runs 2816x2816 -> 750x750 in about thirty seconds. It's IDL code, so if you are not familiar with IDL (likely unless you're an astrophysicist) you'll kinda have to ignore the parts you may not know the syntax for. I find the language is pretty straightforward, personally. It's not commented because I am a terrible person. code:
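For readers who don't know IDL, the column-splitting method just described translates readily; here is a hedged Python sketch of the same idea (my own translation, not the poster's IDL; function names are mine). Each old pixel's flux is split across the new pixels it overlaps, one axis at a time:

```python
def rebin_axis(arr, new_len):
    """Redistribute flux along axis 0, conserving the total."""
    old_len, ncols = len(arr), len(arr[0])
    out = [[0.0] * ncols for _ in range(new_len)]
    scale = new_len / old_len          # width of one old row in new-row units
    for i in range(old_len):
        start, end = i * scale, (i + 1) * scale
        j = int(start)
        while j < new_len and j < end:  # split this old row across new rows
            frac = (min(end, j + 1) - max(start, j)) / scale
            for c in range(ncols):
                out[j][c] += frac * arr[i][c]
            j += 1
    return out

def rebin(arr, ny, nx):
    """Flux-conserving resize: rows first, then columns via transposition."""
    tmp = rebin_axis(arr, ny)
    tmp_t = [list(r) for r in zip(*tmp)]
    out_t = rebin_axis(tmp_t, nx)
    return [list(r) for r in zip(*out_t)]

# The 4x4 -> 2x2 example from the original question:
flux = [[1, 1, 1, 1], [1, 1, 1, 1], [2, 2, 2, 2], [2, 2, 2, 2]]
assert rebin(flux, 2, 2) == [[4, 4], [8, 8]]
```

For awkward ratios like 2816 -> 750, the floating-point pixel boundaries can accumulate small rounding errors; computing start and end with fractions.Fraction makes the splits exact at modest extra cost.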
|
# ? Jan 30, 2009 19:56 |
|
This question came up after my friend showed me a programming test he got while applying for jobs a few weeks back. Is there a non-slow-as-hell way to find the maximum choice of pairs given a weight (okay, that doesn't make any sense, following diagram should help).code:
The obvious way to do this is to permute all possible sets of valid pairings and calculate which set has the best result. This is very slow of course. Is there a better way?
|
# ? Jan 30, 2009 22:28 |
|
wrok posted:The obvious way to do this is to permute all possible sets of valid pairings and calculate which set has the best result. This is very slow of course. Is there a better way? Well, there are only six combinations, so it can't be that slow. But assuming you're talking about the general case, you can do some pruning by keeping track of the best result you've gotten so far, and each time you take a possible assignment from the list of remaining assignments, do a trivial check of "does the current value, plus the maximum possible value from each row remaining to be assigned, exceed the best value I've seen?" and similarly for each remaining column. You can also heuristically always choose the assignment which adds the most immediate value. So, in literate Haskell... code:
ShoulderDaemon fucked around with this message at 23:43 on Jan 30, 2009 |
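For concreteness, the exhaustive baseline (without the pruning or the best-first heuristic described above) can be sketched in Python; the 3x3 weight matrix here is a made-up example, not the one from the test:

```python
from itertools import permutations

def best_assignment(weights):
    """Try every row-to-column assignment and keep the maximum total weight."""
    n = len(weights)
    best_total, best_cols = float("-inf"), None
    for cols in permutations(range(n)):   # n! candidate assignments
        total = sum(weights[r][cols[r]] for r in range(n))
        if total > best_total:
            best_total, best_cols = total, cols
    return best_total, best_cols

W = [[7, 5, 1],
     [3, 9, 2],
     [8, 6, 4]]
assert best_assignment(W) == (20, (0, 1, 2))  # 7 + 9 + 4
```

For anything beyond toy sizes, the Hungarian algorithm solves the same assignment problem in O(n^3) instead of O(n!).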
# ? Jan 30, 2009 23:24 |
|
dealmaster posted:I'm working in C# and am having a bitch of a time with rounding numbers properly. Apparently Math.Round(4.5) returns 4, which is loving mindboggling and I can't understand why it would do such a thing. Does anyone know of a quick, easy way to round the way the rest of the world rounds so that calling Math.Round(4.5) returns 5? Essentially it's rounding down with a decimal of 0.5 instead of rounding up. I have never seen this behavior before. Edit: Didn't realize this had been answered a while back. Just in case you can't use the custom parameter suggested earlier, you could use the following formula: code:
chocojosh fucked around with this message at 07:28 on Jan 31, 2009 |
# ? Jan 31, 2009 07:26 |
|
Not really sure if this goes here, but what the hell: What's the best way to do a menu for a web page? I am by no means a web programmer, but I do some HTML coding for an animal shelter. The menu has about a dozen items and each HTML page has a table with some CSS formatting. If I add a page, or change the menu, I have to do it in every single HTML file, which is really annoying as there are around 30 of them. What is recommended for such a task?
|
# ? Feb 1, 2009 03:38 |
|
NickNails posted:Not really sure if this goes here, but what the hell:
|
# ? Feb 2, 2009 02:19 |
|
Does anybody here have experience with neural networks? I'm getting into them by throwing myself at the Encog toolkit online. I'll take any recommendations for good Java neural network toolkits, but this one seemed to look polished--except that it seems to be paired with a guy pimping his book. (BTW I got the older edition of his book from the library, which uses something else. Drats) I'm reading that the convention for all the signals is a value from 0.0 to 1.0. I'm wondering if it handles in-betweens pretty well. Say, my preferred output is something more analog; it would move around in that range without being committed to one extreme or the other. The inputs are similar. Are there neural networks suitable for those kinds of continuous signals?
|
# ? Feb 2, 2009 07:57 |
|
Rocko Bonaparte posted:Does anybody here have experience with neural networks. I'm getting into them by throwing myself at the Encog toolkit online. I'll take any recommendations for good Java neural network toolkits, but this one seemed to look polished--excepting that it seems to be paired with a guy pimping his book. Why are you looking at a neural network? There are relatively few situations where it is a good choice. Are you familiar with how neural networks work? You need to get them to learn node values by feeding them training data. Whether you get a network with good weights for your working data sets is part planning, part fiddling, and part luck.
|
# ? Feb 2, 2009 08:42 |
|
quadreb posted:Why are you looking at a neural network? There are relatively few situations where it is a good choice. Are you familiar with how neural networks work? You need to get them to learn node values by feeding them training data. Whether you get a network with good weights for your working data sets is part planning, part fiddling, and part luck. I have a large training set, but these values range from extreme negative numbers to extreme positive numbers. From what little I read, it sounds like neural networks are generally bound to the 0.0 to 1.0 range, but I don't know if they handle values in between that range well. I've normalized my inputs to conform to that 0.0-1.0 range; the lowest value in the input is 0.0 and the highest is 1.0. My normalized data looks, for example, like a normal distribution that's hanging around a value in the middle like 0.64. The stuff I'm reading so far talks about threshold values, which makes me think the neurons are meant to be binary. All the examples I've seen so far are binary sets where it's either 0 or 1, and the output is only really expected to be at one of the two extremes. I can't tell yet if that's just my luck or if that's how one has to use them. If they work basically like double-precision boolean values, then I really have to cook my training data to work in a binary manner. I can think of ways of cooking the data further, but I don't want to bang at it without having confidence that's what I even have to do. Alternately, if I had non-binary training data, what would be an alternative to a neural network?
|
# ? Feb 2, 2009 19:15 |
|
Rocko Bonaparte posted:My normalized data looks, for example, like a normal distribution that's hanging around a value in the middle like 0.64. The stuff I'm reading so far talks about threshold values which makes me think the neurons are meant to be binary. All the examples I've seen so far are binary sets where it's either 0 or 1, and the output is only really expected to be at one of the two extremes. I can't tell yet if that's just my luck or if that's how one has to use them. Huh, that explains it...I was getting very confused as to why you kept saying "I don't know if they handle values in between [0 and 1] very well," because that's the entire point of neurons in a neural net. Neurons are supposed to yield a response over the entire range between 0.0 and 1.0, inclusive, indicating the probability of a given decision. Apparently it's just your bad luck to have only found nets that deal with "Yes" or "No" outputs thus far.
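A minimal illustration of why: the standard logistic activation maps any real input smoothly into (0, 1) rather than snapping to the endpoints (a sketch of the common activation function, not Encog's code):

```python
import math

def sigmoid(x):
    """Logistic activation: a smooth, strictly increasing map onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

assert sigmoid(0) == 0.5
assert 0.0 < sigmoid(-2) < 0.5 < sigmoid(2) < 1.0  # genuinely in-between values
```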
|
# ? Feb 2, 2009 19:41 |
|
So far with the normalized data I gave it, it just likes to spit out 1.0. Before I normalized and was out of the 0.0...1.0 range I was getting 0.0. So I haven't seen something--whether my own faulty code or examples--that produces outputs that are between, say, 0.1 and 0.9. Update: After reading some other sites I think I understand my (first) problem. I have three layers, with the first layer having a neuron per input, the second 1.5x the number of input neurons, and the output layer as 1 neuron. I believe that output layer is the problem. If each neuron is either fired or not, then I'm basically looking at a binary output, right? When I get a chance later tonight I'll try 2 neurons on the output and see if I get any kind of change. If I do then I guess I'll have to increase the number of neurons, and I should probably do that across all layers. Rocko Bonaparte fucked around with this message at 20:18 on Feb 2, 2009 |
# ? Feb 2, 2009 19:47 |
|
wrok posted:This question came up after my friend showed me a programming test he got while applying for jobs a few weeks back. Is there a non-slow-as-hell way to find the maximum choice of pairs given a weight (okay, that doesn't make any sense, following diagram should help). I think you can modify the Hungarian Algorithm a little bit to get it finished. To maximize, I'd first find the maximum value in the array, and set C_ij = MAX - C_ij for all i and j. Then do the algorithm.
|
# ? Feb 2, 2009 19:56 |
|
Rocko Bonaparte posted:So far with the normalized data I gave it, it just likes to spit out 1.0. Before I normalized and was out of the 0.0...1.0 range I was getting 0.0. So I haven't seen something--whether my own faulty code or examples--that produces outputs that are between, say, 0.1 and 0.9. Since there's no code or anything to explain this (hint hint), I'm going to say that your neural net isn't trained nearly well enough, or the algorithms assigning weights to each input axon are being easily overloaded.
|
# ? Feb 2, 2009 22:04 |
|
In a C Shell script, how do I output stdout and stderr to a text file (appending it) and display it in the terminal at the same time? For my program create_images I had create_images | tee -a log.txt but that gave me some kind of weird looping thing where it had an error even though the program itself was fine edit: googling around, I guess I'm running my create_images program, and the error is that I don't have any other inputs. But then what am I supposed to put in the beginning? mistermojo fucked around with this message at 02:21 on Feb 3, 2009 |
# ? Feb 3, 2009 02:14 |
|
csammis posted:Since there's no code or anything to explain this (hint hint), I'm going to say that your neural net isn't trained nearly well enough, or the algorithms assigning weights to each input axon are being easily overloaded. It does look promising, but I have to prepare my data better. Currently I normalize by mapping the max/min value range to the 1.0-0.0 window. The data goes positive and negative, and the crossing point should probably be 0.5, but right now it tends to focus around a mean that has drifted from that. It'll take me some time, but at least now it's starting to behave in ways I would expect, so I can see what I can make of it. Anyways, regarding the code, I was using the encog Java library combined with a CSV file data set with about 155,000 rows; the source code I wrote was short but non-descriptive. So I was trying to ask more general questions before trying to splat code out there.
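The two normalizations being contrasted can be sketched like this (my own illustration of the drift described above, not the poster's code): plain min-max lets zero land wherever the data's extremes put it, while scaling by the largest magnitude pins zero to 0.5.

```python
def minmax(xs):
    """Map [min, max] onto [0, 1]; the zero crossing drifts with the data."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def symmetric(xs):
    """Scale by the largest magnitude so that 0.0 always maps to 0.5."""
    m = max(abs(x) for x in xs)
    return [0.5 + x / (2 * m) for x in xs]

data = [-4.0, 0.0, 2.0]
assert symmetric(data) == [0.0, 0.5, 0.75]
assert minmax(data)[1] != 0.5   # zero ends up at 2/3, not the midpoint
```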
|
# ? Feb 3, 2009 05:11 |
|
Any suggestions on a good and extensive book about data structures? I'm finishing my bachelor in CS and looking to get a good revision (and expansion) on the field before I move on to graduate studies, and the textbook we got for our undergrad course was a bit crappy. Ideas?
|
# ? Feb 3, 2009 09:19 |
|
TagUrIt posted:I think you can modify the Hungarian Algorithm a little bit to get it finished. To maximize, I'd first find the maximum value in the array, and set C_ij = MAX - C_ij for all i and j. Then do the algorithm. Awesome, I think that's probably the 'answer' they were looking for. My friend submitted his step-through-with-O(n!) answer and they said it "wasn't quite what they were looking for". Kind of a dumb test, but neat algorithm! Thanks!
|
# ? Feb 3, 2009 16:30 |
|
shodanjr_gr posted:Any suggestions on a good and extensive book about data structures? I'm finishing my bachelor in CS and looking to get a good revision (and expansion) on the field before I move on to graduate studies, and the textbook we got for our undergrad course was a bit crappy. Ideas? CLRS is a good choice. Of course, you can't discuss data structures without covering algorithms as well. Scaevolus fucked around with this message at 02:31 on Feb 4, 2009 |
# ? Feb 4, 2009 02:29 |
|
Scaevolus posted:CLRS is a good choice. Came to recommend this.
|
# ? Feb 4, 2009 03:27 |
|
I like #cobol on synirc. Mostly because it's one of those channels that has active people at all hours and while I don't really contribute anything I can sometimes learn just by reading other conversations going on. Anyway, I got banned. Apparently because I get disconnected often? Or maybe because I "whine" about work stuff (I have a habit of sharing things that happen at work that I find funny and/or stupid). The other day someone flipped out at me because I apparently get disconnected from the synirc network a lot. When he flipped out I RDC'd into my home machine and checked the status and it said I was disconnected about an hour previously and then 12 hours before that, but only from synirc. I suppose I can drop more often at work, usually because I switch to other pppoe accounts to test, or switch to wifi sometimes, etc. Why someone would flip out about this I have no idea. People in there have helped me out a few times solving some javascript issues, and I've learned things I didn't previously know, it just sucks that 1 moron can flip his wig at me over a little stupid thing and then someone else bans me as a result.
|
# ? Feb 4, 2009 14:28 |
|
ScaryFast posted:I like #cobol on synirc. Mostly because it's one of those channels that has active people at all hours and while I don't really contribute anything I can sometimes learn just by reading other conversations going on. That guy is a loving rear end in a top hat. You should get him banned instead for being a little bitch. edit: Let's keep this thread on track with programming questions. This is a good thread.
|
# ? Feb 4, 2009 15:01 |
|
Hybridfusion posted:That guy is a loving rear end in a top hat. You should get him banned instead for being a little bitch. Came to recommend this.
|
# ? Feb 4, 2009 15:04 |
|
mistermojo posted:In a C Shell script, how do I output stdout and stderr to a text file (appending it) and display it in the terminal at the same time? For the original issue, create_images 2>&1 | tee -a log.txt is a somewhat common idiom. This sends stderr to the same file descriptor as stdout (that tee is already reading). For your followup, create_images is likely waiting for data on stdin -- are you supposed to invoke it with any positional arguments, or feed it data via stdin? A guess about your symptom and how your app wants input: code:
|
# ? Feb 4, 2009 15:07 |
|
covener posted:for original issue, create_images 2>&1 | tee -a log.txt is a somewhat common idiom. This sends stderr to the same file descriptor as stdout (that tee is already reading). Actually that would be sh. In csh that would be create_images |& tee -a log.txt Here's a useful reference http://tomecat.com/jeffy/tttt/cshredir.html -- csh redirection is kinda tricky.
|
# ? Feb 4, 2009 15:22 |
|
|
ScaryFast posted:IRC drama I think the problem is that you're not including your .h files correctly, can you post some code? Because maybe this is a thread for doing that and not bitching about whatever.
|
# ? Feb 4, 2009 15:54 |