POKEMAN SAM
Jul 8, 2004

rotor posted:

n&1 is a little weird, but I don't see what makes it a coding horror!!

It's at least hardware dependent. Unless it says somewhere in the C++ specification that integers have to be represented exactly as they are commonly today...


hexadecimal
Nov 23, 2008

by Fragmaster

Ugg boots posted:

It's at least hardware dependent. Unless it says somewhere in the C++ specification that integers have to be represented exactly as they are commonly today...

Do you mean 2's complement instead of a sign bit or something? In that case it would still work for all positive numbers.

POKEMAN SAM
Jul 8, 2004

hexadecimal posted:

Do you mean 2's complement instead of a sign bit or something? In that case it would still work for all positive numbers.

Where does it say that 00000001 is 1? What if hardware specifies that the sign bit is the least significant bit instead of the most significant bit? What if it stores the least significant bit in the highest significant bits place for integers?

Edit: Also stop PMing people and get in #cobol, it's more fun.

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh
The bigger horror is that an optimizing compiler (i.e. all of them) will convert x mod 2^n to an appropriate bitwise operation.

Don't try to outsmart the compiler with micro-optimizations.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Ugg boots posted:

Where does it say that 00000001 is 1? What if hardware specifies that the sign bit is the least significant bit instead of the most significant bit? What if it stores the least significant bit in the highest significant bits place for integers?

The LSB that is the LSB is not the true LSB! Hail Discordia!

You're right in a global sense, of course, at least on some globe where you said something right.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
for the record, (n&1) is about twice as fast as (n%2) in javascript

modulus: 2237

bittwiddle: 1498

code:
<html>
<head>
<script>
function init(){
	var iterations = 10000000;
	var start = new Date();
	for(var i = 0 ; i < iterations ; ++i){
		var foo = i % 2;
	}
	var elapsed = new Date();
	document.write("modulus: "+(elapsed.getTime() - start.getTime()));
	document.write("<p>");
	start = new Date();
	for(var i = 0 ; i < iterations ; ++i){
		var foo = i & 1;
	}
	elapsed = new Date();
	document.write("bittwiddle: "+(elapsed.getTime() - start.getTime()));
}
</script>
</head>
<body onload="init()">
				butt
</body>				
</html>
I know we were talking about this in a C context, but meh

rotor fucked around with this message at 01:35 on Jan 6, 2009

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

rotor posted:

for the record, (n&1) is about twice as fast as (n%2) in javascript

Good lord, it had better be a lot faster than that. n%2 is a floating-point operation in JavaScript.

EDIT: also, JavaScript obviously doesn't have an optimizing compiler.

rjmccall fucked around with this message at 01:37 on Jan 6, 2009

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

rjmccall posted:

Good lord, it had better be a lot faster than that. n%2 is a floating-point operation in JavaScript.

iunno man, run the benchmark urself

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

rotor posted:

iunno man, run the benchmark urself

Folly of microbenchmarks, I think. Your benchmark time is probably dominated by loop and general interpretation overhead. What did you run it in?

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

rjmccall posted:

Folly of microbenchmarks, I think. Your benchmark time is probably dominated by loop and general interpretation overhead. What did you run it in?

build a better one then. Ran it in FF3 and IE6. IE has this thing where it yells at you if the script blocks too long, so those results are suspect.

narbsy
Jun 2, 2007

rjmccall posted:

Folly of microbenchmarks, I think. Your benchmark time is probably dominated by loop and general interpretation overhead. What did you run it in?

Uhhhhhhh it's the same loop.

I ran a simple benchmark in C; without any optimizations, & is faster than %. With -O3 they are essentially the same.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

narbsy posted:

Uhhhhhhh it's the same loop.

Right. So let's say operation A takes 1ms to do a million iterations, operation B takes 100ms to do a million iterations, and loop overhead costs you 100ms. Then the benchmark takes 101ms for A and 200ms for B; voilà, a 100x difference turns into a 2x difference. And trust me that that sort of loop overhead is not unreasonable in naive interpreters.

There are tons of articles out there about how silly this sort of microbenchmarking is, but you should at least subtract out the empty-loop overhead.

The reason I asked about the browser you were using was just curiosity (also you should always mention the host platform/configuration when reporting benchmark results); Firefox's JS implementation leapt forward in 3.1.

rjmccall fucked around with this message at 01:55 on Jan 6, 2009

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

rjmccall posted:

Right. So let's say operation A takes 1ms to do a million iterations, operation B takes 100ms to do a million iterations, and loop overhead costs you 100ms. Then the benchmark takes 101ms for A and 200ms for B; voilà, a 100x difference turns into a 2x difference. And trust me that that sort of loop overhead is not unreasonable in naive interpreters.

There are tons of articles out there about how silly this sort of microbenchmarking is, but you should at least subtract out the empty-loop overhead.

it's the exact same loop, but like I said feel free to make a better one. I'm sorry, but simply saying "It must be X, regardless of what the benchmarks may say" is a little too close to faith-based computing for my tastes.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Just out of curiosity, can anyone name a CPU in significant use today that doesn't use 8 bits per byte and/or two's complement to represent signed integers?

Edit: well blow me down, maybe there are a few network appliances out there that handle one's complement arithmetic because:

quote:

The IPv4 header checksum uses ones' complement arithmetic.

minato fucked around with this message at 02:10 on Jan 6, 2009

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

rjmccall posted:

The reason I asked about the browser you were using was just curiosity (also you should always mention the host platform/configuration when reporting benchmark results); Firefox's JS implementation leapt forward in 3.1.

I posted the source, run it your drat self on whatever platform you like and check the results, jeez

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
code:
<html>
<head>
<script>
function init(){
	var iterations = 10000000;
	var start = new Date();
	for(var i = 0 ; i < iterations ; ++i){
	}
	var elapsed = new Date();
	document.write("empty loop: "+(elapsed.getTime() - start.getTime()));
	document.write("<p>");

	start = new Date();
	for(var i = 0 ; i < iterations ; ++i){
		var foo = i % 2;
	}
	elapsed = new Date();
	document.write("modulus: "+(elapsed.getTime() - start.getTime()));
	document.write("<p>");


	start = new Date();
	for(var i = 0 ; i < iterations ; ++i){
		var foo = i & 1;
	}
	elapsed = new Date();
	document.write("bittwiddle: "+(elapsed.getTime() - start.getTime()));
}
</script>
</head>
<body onload="init()">
				butt
</body>				
</html>
empty loop: 1180

modulus: 3237

bittwiddle: 1884

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

rotor posted:

it's the exact same loop, but like I said feel free to make a better one. I'm sorry, but simply saying "It must be X, regardless of what the benchmarks may say" is a little too close to faith-based computing for my tastes.

That's not what I'm saying at all. I'm saying your benchmark is crap and doesn't say anything useful, so we should ignore it.

Look, I adjusted your benchmark to also post the results of an empty loop, then got stable results for a bunch of different implementations on the machine I'm using to post this. There are some good reasons why that still doesn't capture the performance difference properly, but whatever, you don't care. All these results are on a 2.16GHz core duo, 2GB RAM, MacOS 10.5.6, fairly heavy background load.

Firefox 3.0.5: 600ms empty, 1150ms modulus, 800ms twiddle, i.e. 550ms modulus, 200ms twiddle.
Firefox Jan. 5th nightly: 45/172/54, i.e. 127ms modulus, 7ms twiddle.
Safari 3.2.1: 650/1450/1100, i.e. 800ms modulus, 450ms twiddle.
WebKit nightly: 50/163/68, i.e. 113ms modulus, 18ms twiddle.

So I see the performance difference as anything from 2x (when you would have said 1.3x) to 18x (when you would have said 3x).

hexadecimal
Nov 23, 2008

by Fragmaster
Wow, you people sure got worked up over & vs. %. I just hope this doesn't lead to WW3. (The axis of %)

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

rjmccall posted:

That's not what I'm saying at all. I'm saying your benchmark is crap and doesn't say anything useful, so we should ignore it.
which is why i said 'post a better one.' I'm sorry that I didn't feel it was necessary to note that the benchmark I posted was perhaps nonauthoritative; I sort of figured people would understand that. The only thing I was really trying to discover was whether one was significantly faster than the other, and by around how much.

quote:

Firefox 3.0.5: 600ms empty, 1150ms modulus, 800ms twiddle, i.e. 550ms modulus, 200ms twiddle.

2x, 3x, meh. Seems like it's in the same ballpark to me.

Good to see the other engine results tho.

rotor fucked around with this message at 07:24 on Jan 6, 2009

Steve French
Sep 8, 2003

Ugg boots posted:

Where does it say that 00000001 is 1?

Now, I agree that doing &1 to test for even/odd is silly, but this is a pretty dumb point. If you can't assume something as elementary as that, then bitwise operations quickly become nearly (completely?) useless.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Why do you consider &1 to be silly? Is it code clarity, performance, portability, or something else?

Code clarity I could understand, and same goes for performance unless we're talking about some highly-optimized graphics rendering loop, but I can't imagine anyone seriously citing portability.

narbsy
Jun 2, 2007

minato posted:

Why do you consider &1 to be silly? Is it code clarity, performance, portability, or something else?

Code clarity I could understand, and same goes for performance unless we're talking about some highly-optimized graphics rendering loop, but I can't imagine anyone seriously citing portability.

I'd consider it silly for two reasons. The first is clarity: using a bitwise operation to determine even/oddness instead of the conventional modulo is unclear, and it falls into the same category as if(p), which has been mentioned because it too assumes 0 is false. The second is that if the code is C, as hexadecimal was thinking of, the performance benefit really isn't large: it doesn't show up much in un-optimized code, and barely at all in optimized code.

POKEMAN SAM
Jul 8, 2004

Steve French posted:

Now, I agree that doing &1 to test for even/odd is silly, but this is a pretty dumb point. If you can't assume something as elementary as that, then bitwise operations quickly become nearly (completely?) useless.

No they aren't. They only become useless if you're doing bitwise operations on integers and expecting the result to have a particular numerical meaning. In all worlds 0xFF & 0x01 equals 0x01, but not in all numerical representations does the 0x01 bit represent the same thing. What if, for example, a system only had a floating-point math processor, so all integers were floating-point numbers? Ignoring the precision problems, & 0x01 would not tell you whether the number is even or odd, but in every representation it would give you the least significant bit.

Steve French
Sep 8, 2003

Ugg boots posted:

What if, for example, a system only had a floating point math processor, so all integers are floating point numbers. Ignoring the precision problems, & 0x01 would not tell you if the number is even or odd, but in all representations it would give you the least significant bit.

Well, considering we are talking about C, and in C bitwise operators may only be applied to integral operands, then that's another pretty bogus "what if."

It's entirely reasonable to assume that the representation of a particular positive integer value is its binary equivalent (aside from endianness issues).

shrughes
Oct 11, 2008

(call/cc call/cc)

Ugg boots posted:

No they aren't. They only become useless if you're only doing bitwise operations on integers expecting the bitwise operations to have a result on the numerical representation. In all worlds doing 0xFF & 0x01 would equal 0x01, but not in all numerical representations does the 0x01 bit represent the same thing. What if, for example, a system only had a floating point math processor, so all integers are floating point numbers. Ignoring the precision problems, & 0x01 would not tell you if the number is even or odd, but in all representations it would give you the least significant bit.

If I were floWenoL, I would be posting "Look how stupid you are." Bitwise ANDing a floating point representation with the floating point representation of 1 won't give you the least significant bit; it will give you some power of 2 or NaN. Arguing a point by bringing up these fantastical scenarios is worthless jerkery.

Steve French
Sep 8, 2003

shrughes posted:

If I were floWenoL, I would be posting "Look how stupid you are." Bitwise anding a floating point representation with the floating point representation of 1 won't give you the least significant bit; it will give you some power of 2 or NaN. Arguing a point by bringing up these fantasmical scenarios is worthless jerkery.

Not even that. As I said above, bitwise operators don't even work with floating point operands:

"error: invalid operands to binary &"

PrBacterio
Jul 19, 2000
This whole bit-fiddling discussion seems pretty silly to me. Just look up in the relevant language standard (i.e. C89/90/99, or whatever the current C++ standard is) what these bit-wise operations are supposed to mean; from then on it's the compiler writer's job to ensure that the appropriate semantics are respected on whatever crazy moon-architecture you're compiling your program for.

And while I've no clue about the situation as regards bit-wise operators in C++, iirc the most recent C language standards all prescribe that they operate as expected on at least the positive integers. So while the result of an expression like (-1)>>2 is implementation-defined*, 1<<4 and 17&1 are well-defined expressions with semantics given by the language standard, even if the constants involved are of a signed integer type.

*unlike ((unsigned)(-1))>>2, because the most recent C standards *require* conversions from signed to unsigned integer types to reduce modulo 2^N (which is exactly what reinterpreting a 2's complement representation gives you), i.e. ((unsigned)(-1)) is guaranteed to be UINT_MAX.

EDIT: Oh and also to the people who posted things like
code:
return boolean_expression ? true : false;
I consider things like these to be stylistic issues. I sometimes write something like that when I don't consider the expression to be a sufficiently obvious representation of a boolean value; to the compiler it's all the same, so you write things like that for human readers.

So it'd probably be better to stop arguing about stylistic issues in this thread and concentrate on the actual, unambiguous coding horrors instead. Or should I start posting things like "oh the horror, there are people out there who put the opening braces on the same line as the if/loop instruction/function header, what horrible disgusting abominations shall we encounter next?"

PrBacterio fucked around with this message at 08:23 on Jan 6, 2009

ih8ualot
May 20, 2004
I like turkey and ham sandwiches
Why are we doing performance evaluations on an interpreted language?

:psyduck:

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

ih8ualot posted:

Why are we doing performance evaluations on an interpreted language?

yeah man it's javascript who cares about performance right? the only time programmers should care about performance is when they're programming games and real-time systems.

Lonely Wolf
Jan 20, 2003

Will hawk false idols for heaps and heaps of dough.
Because cycle counting is too depressing :cry:

Trabisnikof
Dec 24, 2005

ih8ualot posted:

Why are we doing performance evaluations on an interpreted language?

:psyduck:

Google does it, why can't we?

ih8ualot
May 20, 2004
I like turkey and ham sandwiches

rotor posted:

yeah man it's javascript who cares about performance right? the only time programmers should care about performance is when they're programming games and real-time systems.

I didn't mean it like that. Of course performance is important for any application you write.

I just meant that in terms of determining instruction efficiency, results from a compiled and optimized language might prove to be a better metric. But I could be wrong.

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

ih8ualot posted:

I didn't mean it like that. Of course performance is important for any application you write.

I just meant that in terms of determining instruction efficiency, results from a compiled and optimized language might prove to be a better metric. But I could be wrong.

Actually, the best metric would be results from the language you're using.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

ih8ualot posted:

I just meant that in terms of determining instruction efficiency, results from a compiled and optimized language might prove to be a better metric. But I could be wrong.


did you miss the part where there's no difference between the two in C and a big difference in javascript?

ih8ualot
May 20, 2004
I like turkey and ham sandwiches
Well, I just got served. :saddowns:

geetee
Feb 2, 2004

>;[
rotor: I just wanted to say that I missed you while you were gone. :h:

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

geetee posted:

rotor: I just wanted to say that I missed you while you were gone. :h:

I missed u too. protip: never go on vacation with small children, it sucks.

ohgodwhat
Aug 6, 2005

rotor posted:

I missed u too. protip: never go on vacation with small children, it sucks.

But what would you do if you need to feast?

heeen
May 14, 2005

CAT NEVER STOPS
GUYS






GUYS






What if the internal representation was trinary? WHAT THEN? &1 is a PORTABILITY HORROR get a grip seriously!


heeen
May 14, 2005

CAT NEVER STOPS

Ugg boots posted:

What if, for example, a system only had a floating point math processor, so all integers are floating point numbers.
Can you even cite one such system? Can such a system even adhere to any C standard?

Hahaha better yet, how do you define modulus for float numbers?


2.0 % 2.0 = 0
2.1 % 2.0 exception modulus on float with remainder, core dumped

heeen fucked around with this message at 15:25 on Jan 7, 2009
