JawnV6
Jul 4, 2004

So hot ...

Dr. Stab posted:

Knowing x86 assembly is probably a downside when it comes to application development.

But that's true of MIPS and the MSP430 as well.

Absurd Alhazred
Mar 27, 2010

by Athanatos

Soricidus posted:

I'm terribly sorry, but if it means you will be continuing to place a space character between arrays and their opening brackets, then I am not able to grant you permission to do that.

The horror. The horror.

JawnV6 posted:

But that's true of MIPS and the MSP430 as well.

Is there a single processor where comparison to zero isn't simpler than any other comparison?

pseudorandom name
May 6, 2007

It's an optimization on x86 because both SUB and DEC set ZF as appropriate so you don't actually have to do a comparison.
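
For reference, a minimal sketch of the count-down idiom in question (the function and array here are made up for illustration). Because the decrement itself sets ZF, the compiler can branch on the flags it already has instead of emitting a separate CMP against zero, typically something like dec ecx / jnz loop.

C++ code:
// Count-down loop: the decrement sets ZF, so x86 compilers can emit
// dec/jnz (or sub/jnz) with no separate cmp against zero.
#include <cstddef>

void process(int item);  // hypothetical

void process_all(const int* items, std::size_t count) {
    for (std::size_t i = count; i != 0; --i) {
        process(items[i - 1]);
    }
}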

ExcessBLarg!
Sep 1, 2001
Ironically it's probably slower because iterating through the array backwards breaks prefetching so you hit a bunch more cache misses.

Except the array is probably not large enough and whether it's a net optimization or inefficiency is completely irrelevant. Still, don't be too clever.

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.
C code:

uint32_t bitmask = /* whatever */;

// later

bitmask &= !SOME_FLAG;

Really more of a horror that the language/compiler doesn't help you out here at all, as the author's intention is pretty clear.
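
A minimal sketch of the difference, assuming the intent was to clear the flag (SOME_FLAG's value here is made up): bitwise NOT masks out one bit, while logical NOT of any nonzero flag is just 0, so the original line wipes the whole mask.

C++ code:
#include <cstdint>

constexpr std::uint32_t SOME_FLAG = 0x4;  // hypothetical value

void demo() {
    std::uint32_t bitmask = 0xFF;

    bitmask &= ~SOME_FLAG;  // presumably intended: clears bit 2, leaves 0xFB
    bitmask &= !SOME_FLAG;  // the horror: !0x4 is 0, so bitmask becomes 0
}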

Absurd Alhazred
Mar 27, 2010

by Athanatos

ExcessBLarg! posted:

Ironically it's probably slower because iterating through the array backwards breaks prefetching so you hit a bunch more cache misses.

Except the array is probably not large enough and whether it's a net optimization or inefficiency is completely irrelevant. Still, don't be too clever.

Yeah, so much optimization is done on the compiler and machine level that you should be focusing optimization on algorithms (usually you'll want to solve something in O(n log n) rather than O(n²), if you know that you're going to be getting to high enough n!) and whatever comes up as a bottleneck in profiling.
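
To put a toy example on the algorithmic point (the duplicate-finding task is just an illustration): the nested-loop version is O(n²), while sorting first gives the same answer in O(n log n), and that's the kind of win no compiler or CPU will hand you.

C++ code:
#include <algorithm>
#include <cstddef>
#include <vector>

// O(n^2): compare every pair.
bool has_duplicate_quadratic(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n log n): sort a copy, then look for equal neighbours.
bool has_duplicate_sorted(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return std::adjacent_find(v.begin(), v.end()) != v.end();
}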

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

ExcessBLarg! posted:

Ironically it's probably slower because iterating through the array backwards breaks prefetching so you hit a bunch more cache misses.

Pretty sure modern CPUs prefetch for both directions of memory access.

ExcessBLarg!
Sep 1, 2001

Subjunctive posted:

Pretty sure modern CPUs prefetch for both directions of memory access.

Perhaps. Was trying to make a point about assumptions and needless deviation from established conventions. Those CPU guys think of everything though.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
laugh all you want, but I used to be a Flash programmer, where these things were literally required since the compiler is poop. so this brought back memories for me. the compiler also used to emit three times as many labels for a for loop as a while loop. so you were told never to use for loops if you wanted a 60fps game.

written by a super friendly and awesome guy, but still poop

other Flash facts: shorter variable names are faster since it simply uses a hashmap for its scope, and locals are stored, name and all.

Xerophyte
Mar 17, 2008

This space intentionally left blank

Absurd Alhazred posted:

Yeah, so much optimization is done on the compiler and machine level that you should be focusing optimization on algorithms (usually you'll want to solve something in O(n log n) rather than O(n²), if you know that you're going to be getting to high enough n!) and whatever comes up as a bottleneck in profiling.

It depends on the problem domain and input size, but you shouldn't expect the compiler to optimize everything except asymptotic complexity if you actually gotta go fast. Parallelization, memory layout and vectorization all matter, by a factor of 10x to 10000x or so depending on the problem and hardware, and compilers for typical mainstream languages will not touch any of them unless you tell them exactly what you want. So while you don't have to worry about making the compiler pick the "faster" comparison before a branch to go fast, you do still occasionally need a list of vector intrinsics and the willingness to smash your head against all the obscure race conditions that you introduced last week.

Fortunately for humanity, we have solved this problem by making CPUs fast well beyond the point where our inability to make very good use of them matters in almost all applications. The majority of programmers can cheerfully code for readability and alterability (or laziness and expedience), rather than carefully encode their data in Morton order or whatever. There's no force like brute force, after all.
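
For a taste of the hand-written vectorization being alluded to, a minimal SSE sketch (the summation task is an arbitrary example, and it assumes n is a multiple of 4; a real version would handle the tail):

C++ code:
#include <xmmintrin.h>  // SSE intrinsics
#include <cstddef>

// Sum a float array four lanes at a time.
float sum_sse(const float* data, std::size_t n) {
    __m128 acc = _mm_setzero_ps();
    for (std::size_t i = 0; i < n; i += 4) {
        acc = _mm_add_ps(acc, _mm_loadu_ps(data + i));
    }
    float lanes[4];
    _mm_storeu_ps(lanes, acc);  // spill the four partial sums
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}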

Zemyla
Aug 6, 2008

I'll take her off your hands. Pleasure doing business with you!

john donne posted:

code:
//Loop backwards, because comparing to zero is faster.
for (int j = customerPurchases.Count-1; j >= 0; --j)

I'm the horror too, because when I can loop in either direction in Haskell, I typically write:

code:
func 0 = done
func i = whatever $ func (i - 1)
-- later
x = func (n - 1)
instead of

code:
func i | i == n = done
func i = whatever $ func (i + 1)
-- later
x = func 0
so I don't have to write the conditional. (It still produces a conditional, but it's implicit.)

99% of the time, I just fold over [0..(n - 1)], since list fusion gives me the same code.

Linear Zoetrope
Nov 28, 2011

A hero must cook

Zemyla posted:

I'm the horror too, because when I can loop in either direction in Haskell, I typically write:

code:
func 0 = done
func i = whatever $ func (i - 1)
-- later
x = func (n - 1)
instead of

code:
func i | i == n = done
func i = whatever $ func (i + 1)
-- later
x = func 0
so I don't have to write the conditional. (It still produces a conditional, but it's implicit.)

99% of the time, I just fold over [0..(n - 1)], since list fusion gives me the same code.

That's typical with functional languages, though (or recursion in procedural languages, for that matter). We generally think of recursion as "shrinking to a base value" even when it could strictly be done the other direction.

The problem with iterating backwards with an actual for loop is like 99% that it's weird, and the less weird it is, the easier it is to understand. You can sacrifice clarity or elegance for performance sometimes, but in this case the only reason it was done was a dubious, almost certainly nonexistent performance gain.

Linear Zoetrope fucked around with this message at 08:45 on Apr 29, 2016

Soricidus
Oct 21, 2010
freedom-hating statist shill
In any case, the correct way to write the reverse loop is
code:
int j = customerPurchases.Count;
while (j --> 0)
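
For anyone who hasn't seen it before, "-->" isn't an operator; it just parses as (j--) > 0, which is why it reads like an arrow counting down to zero. A self-contained sketch (the array is made up for illustration):

C++ code:
#include <cstdio>

int main() {
    const char* items[] = {"a", "b", "c"};
    int j = 3;
    while (j --> 0) {                   // same as: while ((j--) > 0)
        std::printf("%s\n", items[j]);  // visits indices 2, 1, 0
    }
}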

karms
Jan 22, 2006

by Nyc_Tattoo
Yam Slacker

Suspicious Dish posted:

laugh all you want, but I used to be a Flash programmer, where these things were literally required since the compiler is poop. so this brought back memories for me. the compiler also used to emit three times as many labels for a for loop as a while loop. so you were told never to use for loops if you wanted a 60fps game.

written by a super friendly and awesome guy, but still poop

other Flash facts: shorter variable names are faster since it simply uses a hashmap for its scope, and locals are stored, name and all.

I hope you have also used that one xml reader that was written in actionscript instead of the built-in one, because the built-in one was slower.

Just Andi Now
Nov 8, 2009


Soricidus posted:

In any case, the correct way to write the reverse loop is
code:
int j = customerPurchases.Count;
while (j --> 0)

This is as awesome as it is horrifying.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)

Soricidus posted:

In any case, the correct way to write the reverse loop is
code:
int j = customerPurchases.Count;
while (j --> 0)

This is what I was thinking: decrementing has the benefit of not requiring the allocation of another register, if it can be overwritten. Another perk is that if you're fastcalling, it's already there!

The end of section 3.7.2 of the Intel optimization manual has the answer to the prefetching question, but I'm on my phone so I can't quote it. Also you can use the prefetch instruction.

dougdrums fucked around with this message at 14:25 on Apr 29, 2016
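
The prefetch instruction mentioned is exposed as an intrinsic; a rough sketch of explicit software prefetching on a backwards walk (process() and the 16-element lookahead are illustrative assumptions, not tuned values):

C++ code:
#include <xmmintrin.h>  // _mm_prefetch
#include <cstddef>

void process(int item);  // hypothetical

void walk_backwards(const int* data, std::size_t count) {
    for (std::size_t i = count; i != 0; --i) {
        if (i > 16) {
            // Hint that we'll want this data in cache soon.
            _mm_prefetch(reinterpret_cast<const char*>(data + i - 16),
                         _MM_HINT_T0);
        }
        process(data[i - 1]);
    }
}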

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

ExcessBLarg! posted:

Ironically it's probably slower because iterating through the array backwards breaks prefetching so you hit a bunch more cache misses.

Backwards memcpy is actually (very slightly) faster than forwards on a bunch of Intel CPUs.

john donne
Apr 10, 2016

All suitors of all sorts themselves enthral;

So on his back lies this whale wantoning,

And in his gulf-like throat, sucks everything

That passeth near.

Soricidus posted:

In any case, the correct way to write the reverse loop is
code:
int j = customerPurchases.Count;
while (j --> 0)

I'm going to use this in all of my refactor PRs from now on

Also as you probably could have predicted, that collection was indeed like 8 items long, and that loop was found in one of the worst-architected applications I've ever seen.

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

feedmegin posted:

Qt is pretty good about binary compatibility within each major version, actually, though it does take some effort internally.

Not too much effort, it's a well known pattern called pimpl. Qt has slight complications with pimpl related to some uses of its event dispatching system, but it's all mostly painless.

Cuntpunch posted:

Excuse me you forgot to extend(or Implement) Vertebrate, so all your mammals are going to have broken movement methods.

This is what the entity-component-system model was born for

hackbunny fucked around with this message at 02:43 on Apr 30, 2016

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

hackbunny posted:

Not too much effort, it's a well known pattern called pimpl.

I'm the indirect branch on everything.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
isn't that what link-time optimization is for? can't most linkers inline simple functions like getters?

Captain Cappy
Aug 7, 2008

Suspicious Dish posted:

isn't that what link-time optimization is for? can't most linkers inline simple functions like getters?

Pimpl means that you're using dynamic memory allocation everywhere and therefore you're less likely to have the object in your memory cache. Even if the linker inlines the getter it still has to get the info from memory. Linked lists are slower than vectors (ArrayList) for the same reason: every new linked-list item is at a new place in memory instead of right nearby.
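
For context, a minimal sketch of the pimpl pattern being argued about (Widget and its members are made up, not from Qt or any particular library): the public class holds only a pointer and the real data lives in a separately heap-allocated Impl, which is where both the ABI stability and the extra indirection/cache cost come from.

C++ code:
// widget.h -- public header exposes only a pointer to the impl,
// so the private layout can change without breaking clients.
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();              // must be out-of-line, where Impl is complete
    int value() const;      // forwards to the impl
private:
    struct Impl;                   // defined only in widget.cpp
    std::unique_ptr<Impl> impl_;   // the heap allocation under discussion
};

// widget.cpp
struct Widget::Impl {
    int value = 42;
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
int Widget::value() const { return impl_->value; }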

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
pimpl's memory impact isn't worse than just using a lot of reference types. it can obviously be taken to ridiculous extremes like separately allocating Point2Ds but if you just use it for large-ish types and/or at boundaries that are polymorphic anyway (i.e. already indirected and separately allocated) it's not a noticeable penalty and can actually improve locality vs. inlining the storage of everything

i have no idea what subjunctive is saying, pimpl introduces an extra level of call but it's not indirect. lto will kill that negligible extra indirection when you're using the pimpl interface internally, but a lot of times you're using pimpl at library boundaries where lto won't help / isn't a performance bottleneck anyway. like i just wrote a pimpl library (for lldb's use) for looking up swift ast types from remote type metadata, there is really no plausible use of this library where a tiny issue delay in the pimpl thunk is going to show up vs. all the time it spends processing memory and performing i/o with the inferior process. that is how you are supposed to use pimpl

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Captain Cappy posted:

Pimpl means that you're using dynamic memory allocation everywhere

as opposed to using stack allocation for long-lived objects? what?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

rjmccall posted:

i have no idea what subjunctive is saying, pimpl introduces an extra level of call but it's not indirect. lto will kill that negligible extra indirection when you're using the pimpl interface internally, but a lot of times you're using pimpl at library boundaries where lto won't help / isn't a performance bottleneck anyway.

Sorry, yes, I'm referring to the extra indirection and the hand-wringing that I have seen accompany it. I am not seriously advocating against pimpl on the basis of that cost.

Captain Cappy
Aug 7, 2008

Suspicious Dish posted:

as opposed to using stack allocation for long-lived objects? what?

Not necessarily stack allocation I guess, but the issue is that in the following example, your class's two objects are nowhere near each other in memory. And yeah, this isn't a big deal for stuff like GUI components or other classes that there aren't a lot of, but it will ruin your cache hits if you put them in an array or vector.

code:
class MyClass
{
  PimplClass1 object1; // object1's heap-allocated impl is nowhere near object2's
  PimplClass2 object2;
};

Soricidus
Oct 21, 2010
freedom-hating statist shill
e: misread you

Soricidus fucked around with this message at 16:33 on Apr 30, 2016

Cuntpunch
Oct 3, 2003

A monkey in a long line of kings
Out of nowhere in the past week, our lackluster contractor has started commenting like this *everywhere*
code:
public void Foo()
{
	if(true)
	{

	}//end if
}//end Foo
Will write ridiculously tangled code without explanation, but by goddamn will they ever explain what a close-brace closes :smithicide:

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
That's actually a trademark of decompilers and other tools like that. Are you sure he's not stealing code?

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
I've done that sometimes manually when I'm trying to cope with some ungodly huge function and it really is that hard to figure out what the close-curly corresponds to. Doing it everywhere? Nnnnnot so much.

necrotic
Aug 2, 2005
I owe my brother big time for this!
use code folding in those situations.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
Yeah, I probably should do that, but the added annoyance helps encourage me to refactor the code in question, so it balances out. :v:

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Suspicious Dish posted:

That's actually a trademark of decompilers and other tools like that. Are you sure he's not stealing code?

I have met multiple human beings who habitually comment the ends of blocks (and in languages where the equivalent of "decompilation" would be obvious for reasons other than weirdo commenting, so I'm confident that isn't it).

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



BobHoward posted:

I have met multiple human beings who habitually comment the ends of blocks (and in languages where the equivalent of "decompilation" would be obvious for reasons other than weirdo commenting, so I'm confident that isn't it).

Cuntpunch said he started out of nowhere, which points to it not being that insipid habit.

JawnV6
Jul 4, 2004

So hot ...

BobHoward posted:

I have met multiple human beings who habitually comment the ends of blocks (and in languages where the equivalent of "decompilation" would be obvious for reasons other than weirdo commenting, so I'm confident that isn't it).

Yeah, that's all over the place in HDLs. I'm not sure if that was a stylistic choice or just necessary because of bad tools and giant code blocks.

Cuntpunch
Oct 3, 2003

A monkey in a long line of kings

Suspicious Dish posted:

That's actually a trademark of decompilers and other tools like that. Are you sure he's not stealing code?

Yes, because I didn't include the gross tendency towards typos that, kindly, would be called the obliteration of the English language in my example comment :v:

fritz
Jul 26, 2003

BobHoward posted:

I have met multiple human beings who habitually comment the ends of blocks (and in languages where the equivalent of "decompilation" would be obvious for reasons other than weirdo commenting, so I'm confident that isn't it).

If I have to have deeply-nested things, like 3-d for loops, I will absolutely write comments like "// end for(i)".
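
A quick sketch of what that looks like (the loop bounds are arbitrary):

C++ code:
void fill(int grid[4][4][4]) {
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            for (int k = 0; k < 4; ++k) {
                grid[i][j][k] = i + j + k;
            } // end for(k)
        } // end for(j)
    } // end for(i)
}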

Linear Zoetrope
Nov 28, 2011

A hero must cook
I was having a weird issue with matplotlib graphing the same graph twice; eventually I found the problem:

code:
def my_func(x):
	y.append(1)

if __name__ == "__main__":
	y = []
	z = []

	my_func(z)

	print(y)
	print(z)
Prints

[1]
[]

I had refactored the plotting code into a function because I added the second graph, but I forgot to change the name of the variable to the argument. Mumblegrumble Python scoping idiosyncrasies, I would've expected an error.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
The lack of block scoping and the lack of true multithreading are my biggest peeves with Python, and I'm honestly not certain which of the two is worse.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Threads are terrible, if that helps.
