shodanjr_gr
Nov 20, 2007

Subjunctive posted:

It can happen in a lot of different places. Any event in the queue could be holding the last reference, a network timeout could drop it, replacement in a cache, script calling a general purpose DOM method, etc. Many places in complex systems interact with base classes, and are hard to "know well enough" to make predictions about all the places something big can go synchronously dead on the main thread. (And what if you can predict them? What do you do about it? Reinvent the sweep phase?)

With ref-counting you are generally much more cognizant of the lifetime of your objects, however, so you can have at least some notion of where the destructors are going to trigger and you can (if you care) structure your software around that. Whereas with GC, my understanding is that you have very little control (other than forcing a GC pass manually while keeping track of memory allocations, which I believe was a recommended strategy for latency-sensitive Java applications?).

Anyway, talk #2 is here: https://www.youtube.com/watch?v=5Nc68IdNKdg

This is less of a horror IMO, he makes some interesting points about anonymous function syntax and generalizing argument capturing to scopes.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

shodanjr_gr posted:

With ref-counting you are generally much more cognizant of the lifetime of your objects, however, so you can have at least some notion of where the destructors are going to trigger and you can (if you care) structure your software around that. Whereas with GC, my understanding is that you have very little control (other than forcing a GC pass manually while keeping track of memory allocations, which I believe was a recommended strategy for latency-sensitive Java applications?).

Anyway, talk #2 is here: https://www.youtube.com/watch?v=5Nc68IdNKdg

This is less of a horror IMO, he makes some interesting points about anonymous function syntax and generalizing argument capturing to scopes.

Many a horror has been propagated by people who think refcounting is the magical deterministic automatic memory management technique.

Consider NUMA situations (i.e. every modern processor ever built). Sure, you know when a refcount operation will be triggered (lexically), but you have no idea how long it will take.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
The same is true for pointer dereferences, but that's not a reason to throw up your hands and say it's all equally poo poo.

shodanjr_gr
Nov 20, 2007

Malcolm XML posted:

Many a horror has been propagated by people who think refcounting is the magical deterministic automatic memory management technique.

Consider NUMA situations (i.e. every modern processor ever built). Sure, you know when a refcount operation will be triggered (lexically), but you have no idea how long it will take.

Yet notice how I did not claim that it is magical or deterministic.

When you make software with ref-counting you (should) have SOME understanding of its runtime behavior which allows you to develop with an expectation of where and when large cleanups or dtor-chains are going to occur given a particular code structure. If those things can happen in performance-sensitive paths, you can actually try to do something about it (because you might actually have some idea about how long it will take).

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Jabor posted:

The same is true for pointer dereferences, but that's not a reason to throw up your hands and say it's all equally poo poo.

No, refcounting has its place, but it's often used without realizing that it requires a lot of manual bookkeeping to avoid circular references and wild free chains, plus it adds interprocessor traffic with every ref update unless you do complicated tricks.

In GC the active lifetime of your object is entirely up to you, it's just lazily deallocated in a cleanup pass. One technique I ended up using was to suspend the GC's processing during a critical path and then when I didn't need to care about a pause just let it go hog wild later on.

At its extreme you get the best form of GC, namely exit() and letting the OS clean up your virtual trash :colbert:

Doctor w-rw-rw-
Jun 24, 2008

Subjunctive posted:

Reference counting is considered garbage collection in all of the literature I've read. There is a difference between decrementing a refcount and calling free(), and it's an important one.

I've seen it referred to as a memory-management strategy, but strictly speaking, reference counting doesn't need to have anything to do with garbage collection.
Reference counting *can* be implemented so that the garbage collector non-deterministically cleans up retain=0 objects. I referred specifically to ARC*, which refers mainly to the Swift/ObjC thing, where if you set a breakpoint at a specific place in the code and step, an object that goes from 1 retain to 0 retains *will* cause deallocation on the spot. I see that less as garbage collection (i.e. scheduled pickup of garbage), and more as automatic memory management. IMO, automatic memory management is not strictly equivalent to garbage collection.

* I suppose I could also have referred to C++ and its pointer types, but I don't really use those so I'm not familiar enough.

Subjunctive posted:

You can definitely get unpredictable pauses from reference counting. "Oh that's the last ref to this big document, let's have a node-freeing party!" Plus you actually touch the things that are garbage, say hi to your working set and cache lines for me.

Unpredictable? Yes, in the sense that executing code can be unpredictable. Totally uncontrollable? No. If you can recreate a set of conditions that predictably cause object lifetime problems with ARC, you can typically reproduce it and use a debugger to watch the thing fail, assuming you don't have any crazy concurrency issues, but those are liable to cause lots of issues beyond just memory management.

With iOS specifically, you usually hold references to stuff you need in a view controller or some controller held by one, so the freeing party often happens after the controller is removed from the view hierarchy. Since that usually follows the end of a user interaction, it's an opportune time to free things: the application just did something interactive and probably has time before the next user action.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Reference counting is often called garbage collection in the literature, but IMO it leads to the term GC being pretty much meaningless. If you include manual reference counting then you're pretty much just defining GC as "doesn't involve manual calls to malloc and free", which is an utterly asinine definition, and if you don't then you're left with GC being a poorly named synonym for automatic memory management.

Flobbster
Feb 17, 2005

"Cadet Kirk, after the way you cheated on the Kobayashi Maru test I oughta punch you in tha face!"
Let's solve this with an oversimplified analogy. If the sanitation department pops into your house on demand any time you've finished eating your pudding cup and set it aside, then reference counting is garbage collection. Does anyone live in a city with such service? :v:

Dicky B
Mar 23, 2004

A good GC can only be triggered explicitly or by an allocation request :colbert:

PleasingFungus
Oct 10, 2012
idiot asshole bitch who should fuck off
I ran into this the other day, and thought this thread might appreciate it.

code:
+                        case 4:
+                                ability [abil_c] = 140;
+                                ability_fail [abil_c] = 40 - (you[0].piety / 10) - you[0].skills [38] * 3;
                                 abil_c ++;
                                 break;
code:
+if (grd [you[0].x_pos + xps - 17] [you[0].y_pos + yps - 9] == 65)
+{
+ if (menv [i].m_ench [2] == 6 && mons_flies(menv [i].m_class) == 0 && player_see_invis() == 0)
+ {
+  mpr("There is a strange disturbance in the water here.");
+ }
+}
code:

 if (targetc == 250)
 {
+ loopy:
    do
    {
-      targetc = random2(52); /* I think. Shouldn't poly into eg orc _wizard_ */
-   } while (targetc == 31 || targetc == 25 || targetc == 51); /* no fiends or zombies */
+      targetc = random2(400);
+   } while (mons_rarity(targetc) == 0 || targetc == 99 || targetc == 25 || targetc == 51 || targetc == 367 || targetc == 107 || targetc == 108); /* no shapeshifters or zombies/skeletons/spectr */
+
+ if (grd [menv [monsc].m_x] [menv [monsc].m_y] == 61 || grd [menv [monsc].m_x] [menv [monsc].m_y] == 62)
+   if (mons_flies(targetc) == 0) goto loopy; /* Not fair to instakill a monster like this (actually, I can't be bothered implementing it) */
+ /* Too long to put in the loop thing */
 }
code:
+  int datalen=30+35+10+69+6+5+25+2+30+5+25+12*52+50*5+50*4+50+50+6*50+50+50+30+30+30+100+50+100;
code:
+if ((item_class_inv == -1 && inv_count > 0) || (item_class_inv != -1 && Inv_class2 [item_class_inv] > 0) || (item_class_inv == 1 && (Inv_class2 [0] > 0 || Inv_class2 [1] > 0)) || (item_class_inv == 0 && (Inv_class2 [0] > 0 || Inv_class2 [11] > 0)) || (item_class_inv == 0 && (Inv_class2 [0] > 0 || Inv_class2 [13] > 0)) || (item_class_inv == 6 && (Inv_class2 [6] > 0 || Inv_class2 [10] > 0)))// || (item_class_inv == 3 && (Inv_class2 [3] > 0 || Inv_class2 [11] > 0)))
Etc, etc.

This codebase is still in use and being maintained today, though tragically the very great majority of it has since been rewritten.

1337JiveTurkey
Feb 17, 2005

Roguelikes are fertile ground for finding :psyduck: code.

fritz
Jul 26, 2003

1337JiveTurkey posted:

Roguelikes are fertile ground for finding :psyduck: code.

quote:

rnz produces such a bizarre distribution that it is hard to tell what the original programmer had in mind

Corla Plankun
May 8, 2007

improve the lives of everyone
That rng fills me with such merriment.

Good Will Hrunting
Oct 8, 2012

I changed my mind.
I'm not sorry.
Today I sat down to fix a bug that someone on our product team opened up. The bug was related to a small feature I had added 3 or 4 months ago. When I read the details of the bug I thought to myself "This isn't a bug, this feature was never implemented and it's working as intended". So I spent some time testing it and looking at the code to confirm that it was working as intended. While going to dig up the story I had worked on when it was originally released I stumbled across an active feature request from someone on the product team requesting I do the work that is also currently listed as a bug (and the number one bug priority before our next sprint). Adding the feature is actually substantially more work and will have an impact on our delivery. Are all product people ridiculously unorganized?

Good Will Hrunting fucked around with this message at 00:33 on Oct 7, 2014

ErIog
Jul 11, 2001

:nsacloud:

1337JiveTurkey posted:

Roguelikes are fertile ground for finding :psyduck: code.

Eh, is that really a horror? It's poorly documented, but there are a lot of reasons, while making a game, that you would want a weird distribution like that. Roguelikes, especially, make a ton of use of the RNG, and the more times you run a more stable RNG the more predictable it becomes due to the even distribution of the output. Sure, the player wouldn't be able to predict the value of the next output, but they'd be able to much more easily intuitively discern how the designers have bounded the RNG.

It's more of a simple formula than an actual RNG, but it is a mathematical way of achieving their goal of having some common values return pretty often along with the ability to sometimes return values that are a lot larger. Then also it's bounded, and so there's a reliable maximum.

It strikes me as a pretty simple way to model the thing the designers wanted to model which is the player's interaction with deities and the mystical world. Their intent was probably to make something that feels more random than real random.

From a mathematical perspective that might sound crazy, but there's a big difference between the way humans perceive probability and the way probability actually operates. Sid Meier discussed this a little bit in a 2010 GDC talk. They were receiving negative feedback from users during playtesting sessions because the displayed likelihood of winning a specific battle between two units was the real flat mathematical probability. The displayed probabilities didn't match the way players felt about that probability, and so the game didn't behave in a way they thought it should.

So they put in a system that displays a human-perception-adjusted probability, and they got better feedback.

http://www.third-helix.com/2010/03/13/gdc-2010-sid-meiers-keynote.html

Code doesn't exist in a vacuum. This code probably should be documented better, but it seems like an elegant way to achieve the goal the designers wanted to achieve.

ErIog fucked around with this message at 01:01 on Oct 7, 2014

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

ErIog posted:

Code doesn't exist in a vacuum. This code probably should be documented better, but it seems like an elegant way to achieve the goal the designers wanted to achieve.
Or it could have been written in a vacuum wholly untested with the author not realizing how weird it was, and ever since everyone's been afraid to change it because they don't understand it and nothing seems especially broken.

ephphatha
Dec 18, 2009




Good Will Hrunting posted:

Today I sat down to fix a bug that someone on our product team opened up. The bug was related to a small feature I had added 3 or 4 months ago. When I read the details of the bug I thought to myself "This isn't a bug, this feature was never implemented and it's working as intended". So I spent some time testing it and looking at the code to confirm that it was working as intended. While going to dig up the story I had worked on when it was originally released I stumbled across an active feature request from someone on the product team requesting I do the work that is also currently listed as a bug (and the number one bug priority before our next sprint). Adding the feature is actually substantially more work and will have an impact on our delivery. Are all product people ridiculously unorganized?

"I know feature requests are considered low priority, and bug fixes are high priority. So if I log this feature request as a bug fix maybe it'll get done faster!"

Or at least that's what happens around here. Then again, our tickets come from end users, not people who have a vested interest in doing things the right way.

tef
May 30, 2004

-> some l-system crap ->

Plorkyeran posted:

Reference counting is often called garbage collection in the literature, but IMO it leads to the term GC being pretty much meaningless

GC roughly means "automatic memory management performed at runtime", but doesn't precisely guarantee things beyond that, and that is OK. The words you are looking for are tracing GC. Tracing covers things like mark/sweep, copying collectors, et al. The reason it's worth counting both RC and tracing as garbage collection is that that's actually how it works in practice for many garbage collection systems. Reference counting and tracing are duals, too.


Further reading: http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf

ErIog
Jul 11, 2001

:nsacloud:

ShadowHawk posted:

Or it could have been written in a vacuum wholly untested with the author not realizing how weird it was, and ever since everyone's been afraid to change it because they don't understand it and nothing seems especially broken.

"Broken" is a concept that only makes sense with regard to some goal in mind. Yeah, in most programming it's clear when something is broken because it's clear what it's trying to do. This RNZ thing is more of a game design issue than a programming issue, though. It's complicated, but "broken" is a wholly subjective judgement. We don't know if it's broken because we don't know what the intent was.

Is the jump curve in Super Mario Bros. broken because they're not simulating true gravity? On its face that's a silly question because everyone knows the jump curve in Mario 1 feels pretty good, and that same sort of logic applies here.

Your assertion that it's a black box that nobody messes with because they don't understand it doesn't really make sense. It's documented on that wiki complete with a graph of its distribution in a common case, and shitloads of math talking about what its doing. Contrary to your point, it seems quite well understood.

This is quite a different thing than what we usually talk about in this thread with people doing really stupid and clearly broken things. Most of Nethack really is a coding horror, and roguelikes definitely are fertile ground for them. RNZ is like the worst example to pick, though.

ErIog fucked around with this message at 04:03 on Oct 7, 2014

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
I know what you're getting at here, but in this case it really looks like whoever wrote it had something in mind and made a mistake or had a lapse in reasoning somewhere along the way. I doubt that was the distribution they were aiming for.

Corla Plankun
May 8, 2007

improve the lives of everyone
Option one: The man did this exact distribution on purpose because games are special snowflakes that need insane probability distributions because humans could learn to predict traditional distributions.

Option two: The terrible attitude from option one made him blaze his own trail through random number generation and he wound up in a real gross code-bog.

Since your own post about humans not understanding probability counters Option one, I'm thinking it is probably number two. "Number two" is also a euphemism for poop.

PleasingFungus
Oct 10, 2012
idiot asshole bitch who should fuck off

1337JiveTurkey posted:

Roguelikes are fertile ground for finding :psyduck: code.

Nethack, huh?

/* fall through to next case */

PleasingFungus
Oct 10, 2012
idiot asshole bitch who should fuck off
switch/case with default fallthrough is one of the worst-conceived language elements ever implemented. if it did not exist, INTERCAL would have had to invent it.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

PleasingFungus posted:

switch/case with default fallthrough is one of the worst-conceived language elements ever implemented. if it did not exist, INTERCAL would have had to invent it.

Yeah, that was a huge screwup at the time that we're stuck with. At least lint can require that you annotate switch fallthrough nowadays.

One Eye Open
Sep 19, 2006
Am I awake?

PleasingFungus posted:

switch/case with default fallthrough is one of the worst-conceived language elements ever implemented. if it did not exist, INTERCAL would have had to invent it.

If it didn't exist, though, we'd never have Duff's device!

qntm
Jun 17, 2009

One Eye Open posted:

If it didn't exist, though, we'd never have Duff's device!

If switch statements are "ill-conceived", what does being able to jump into the middle of a loop count as?

Nippashish
Nov 2, 2005

Let me see you dance!

One Eye Open posted:

If it didn't exist, though, we'd never have Duff's device!

With non-default fallthrough Duff's device would just be a bit more verbose.

omeg
Sep 3, 2012

https://github.com/coapp-packages/bzip2/blob/master/decompress.c

That macro at the top with a case label in it... Was there no other way to do this? :psyboom:

Qwertycoatl
Dec 31, 2008

omeg posted:

https://github.com/coapp-packages/bzip2/blob/master/decompress.c

That macro at the top with a case label in it... Was there no other way to do this? :psyboom:

It's pretty wtf and could do with a giant comment explaining what's going on, but it's a pretty neat way of doing coroutines.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Qwertycoatl posted:

It's pretty wtf and could do with a giant comment explaining what's going on, but it's a pretty neat way of doing coroutines.

I've written streaming reads in the "correct" way, i.e. basically inside-out, and this is much easier to follow.

I see them open-coding a 64-bit counter out of two 32-bit counters... why? I thought all compilers supported a 64-bit type for at least basic operations like increment.

code:
 s->strm->total_in_lo32++; \
 if (s->strm->total_in_lo32 == 0) \
   s->strm->total_in_hi32++; \

b0lt
Apr 29, 2005

Harik posted:

I thought all compilers supported a 64bit type for at least basic operations like increment overflow.

bzip2 is a decade-old project that supports all sorts of awful platforms

feedmegin
Jul 30, 2008

b0lt posted:

bzip2 is a decade old project that supports all sorts of awful platforms

Even mainstream compilers have lacked 64-bit arithmetic in the not-super-distant past, which is why things like this exist -

http://msdn.microsoft.com/en-gb/library/windows/desktop/aa383742(v=vs.85).aspx

ctz
Feb 6, 2003

feedmegin posted:

Even mainstream compilers have lacked 64-bit arithmetic in the not-super-distant past, which is why things like this exist -

http://msdn.microsoft.com/en-gb/library/windows/desktop/aa383742(v=vs.85).aspx

That struct was introduced in Windows 95 Service Release 2 (1996). I think that's the distant past with respect to computing.

omeg
Sep 3, 2012

ctz posted:

That struct was introduced in Windows 95 Service Release 2 (1996). I think that's distant past with respect to computing.

It's still all over the kernel APIs. :smith:

evensevenone
May 12, 2001
Glass is a solid.

Qwertycoatl posted:

It's pretty wtf and could do with a giant comment explaining what's going on, but it's a pretty neat way of doing coroutines.

Yeah, it doesn't seem that bad to me.

Putting case inside a macro is pretty common in parser / comms code. The fallthroughs are a little tricky, but if you're a C programmer you're used to that.

The variable names are pretty horrid though ("i", "lll", "curr", etc.), and there seems to be quite a lot of state, but bzip2 is pretty gnarly (Wikipedia says there are nine layers of compression), so maybe it's needed.

MrMoo
Sep 14, 2000

feedmegin posted:

Even mainstream compilers have lacked 64-bit arithmetic in the not-super-distant past, which is why things like this exist -

http://msdn.microsoft.com/en-gb/library/windows/desktop/aa383742(v=vs.85).aspx

That also exists because it is safer when copying a misaligned word on some platforms and faster on others.

hobbesmaster
Jan 28, 2008

feedmegin posted:

Even mainstream compilers have lacked 64-bit arithmetic in the not-super-distant past, which is why things like this exist -

http://msdn.microsoft.com/en-gb/library/windows/desktop/aa383742(v=vs.85).aspx

It's a Unix utility; they're probably trying to maintain compatibility with all sorts of esoteric stuff.

Necc0
Jun 30, 2005

by exmarx
Broken Cake
I was told to x-post this here:

Necc0 posted:

[ASK] me about when I spent an hour trying to debug a method I wrote only to find out that the imported JDBC library was actually buggy and unreliable-

-because we decided to roll our own JDBC library :shepicide:

canis minor
May 4, 2011

Ignore me, nothing to see here for time being

canis minor fucked around with this message at 14:54 on Oct 9, 2014

feedmegin
Jul 30, 2008

ctz posted:

That struct was introduced in Windows 95 Service Release 2 (1996). I think that's distant past with respect to computing.

Dude, the company I work for still supports our software on MicroVAXes. Don't underestimate how far legacy support can go. ;)
