rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

MALE SHOEGAZE posted:

change is my axiom of choice

the axiom of choice is my copilot


rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
three obvious possibilities

you might have a static initialization ordering problem, this should be really obvious in the debugger

you might be depending on something getting linked in (a category?) from an unused object in the .a

your dylib is not actually the same code as your .a
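(a minimal sketch of the first one, with made-up files: cross-file static initialization order is unspecified, so changing how the code gets linked can change it)

code:

// logger.cpp (hypothetical)
#include <string>
std::string make_prefix() { return "[log] "; }
std::string g_prefix = make_prefix();

// main.cpp (hypothetical)
#include <string>
extern std::string g_prefix;
// whether this sees a fully-constructed g_prefix depends entirely on which
// translation unit's dynamic initializers happen to run first; relink the
// same objects differently and that order can change with it
std::string g_banner = g_prefix + "ready";

int main() {}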

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
static archives are a neat technology but unfortunately they solve an almost completely different technical problem from dynamic libraries and so the semantics are really different

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
c-based languages are actually legitimately more difficult to do autocomplete for, c++ especially. tiny differences in inclusion order might mean you're actually finding the NULL from foo.h instead of the NULL from bar.h. and then you include baz.h, and are you actually sure you're going to get the exact same set of declarations? so much for sharing anything between files

and in c++ you write foo( and even the compiler doesn't technically know all the declarations you might be using until it sees the types of the arguments. plus all the template instantiation you have to do. plus the problems of lookup within templates
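(a tiny sketch of the candidate-set problem, with made-up headers; every file's include set is its own little world, so there's nothing to share)

code:

// foo.h (hypothetical)
struct Foo {};
inline void frob(const Foo &) {}

// bar.h (hypothetical)
struct Bar {};
inline void frob(const Bar &) {}

// client.cpp: what completing "frob(" should offer depends on exactly
// which headers this file includes, in this order, under these macros
#include "foo.h"
//#include "bar.h"    // uncomment and the overload set changes

int main() {
    frob(Foo{});      // resolves only because foo.h is included here
}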

xcode does all this, fwiw, though i won't deny it has its own problems

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
it does. it also suffers from it; xcode code completion is a lot slower and less stable than some custom thing with less fidelity might be

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
yep

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

bucketmouse posted:

from looking through it while trying to figure out how to port over random functionality the slowdown seems to be because a big chunk of the template functions do something like this for every single variable:

code:

template <typename TT>
TT double_number(const TT &number)
{
    TT Number = TT(number);
    return Number * 2;
}

e: ^ gangtag owns
e2: i wonder if the slowdown is a vc compiler thing and other better compilers would optimize Number out due to it being an init-from-const whose value never changes

tl;dr: it's not just a vc compiler thing

c++ has very rigid behavioral semantics: the language gives precise rules for deciding which operator or constructor or whatever any particular code is using, and the implementation has to do exactly that. for example, the compiler can't assume that it's okay to use a move constructor instead of a copy constructor just because it sees that the original object is about to be destroyed, or a copy constructor instead of a copy assignment operator just because it sees that you never used the old value in the variable. you wrote something, type-checking says it's implemented by calling such-and-such function, and the compiler has to assume that calling some other function might have completely different semantics, no matter how related they might seem

in general this extends to order of execution as well. if two things are sequenced, they have to happen in that order, no matter how unrelated they might seem. the language says that you destroy local variables in the reverse order of their construction as you leave a scope, and it says that the operand of a return statement is evaluated within the scope in which it appears, and that means that those local variables must still exist during that evaluation

there are a few different things that let the compiler optimize despite that:
  • some things aren't sequenced, like different call arguments, and the compiler can do those in whatever order it feels like; sometimes that helps.
  • some things have special language rules which override the general semantics, like copy-elision and NRVO, which allow the compiler to eliminate temporary objects (for different definitions of temporary) in very specific situations, even if it's user-visible. but the situations are very specific and basically syntax-directed
  • "as if" lets the compiler do anything if the program can't (legally) observe the difference, but that's surprisingly difficult for non-POD objects that e.g. allocate memory in their constructors and get passed to lots of interesting code
  • you can always hack knowledge of a type and its operations specially into the compiler (or provide high-level rewrite rules, like GHC does)

in your example, the formal series of operations is: construct a temporary from const lvalue "number", construct "Number" from rvalue temporary, destroy temporary, call operator* with address of "Number" (assuming it takes it by const reference) and consider the result to be another temporary, construct function result from rvalue temporary, destroy temporary, destroy "Number", return. both the temporaries are elidable, but "Number" is not. even if inlining shows that "number" is clearly bound to a temporary, tough luck, you're still constructing "Number"; and you're copy-constructing it, not move-constructing

now, if the matrix happens to be POD — if it stores its elements in a big inline array — the compiler has a chance of being smart about this, because compilers are sometimes smart about memcpys; but if it stores its data on the heap, you are probably screwed. but regardless you are screwed in unoptimized builds, presumably as in your test suite; compilers will do copy-elision and NRVO because they are easy and essentially amount to language guarantees, but they do not do interprocedural analysis and copy propagation because not doing that poo poo is what makes it an unoptimized build
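(to make that sequence visible: a minimal sketch with a deliberately loud copy constructor, assuming pre-c++17 semantics to match the compilers under discussion. even unoptimized you should see exactly one "copy", for "Number"; the elidable temporaries never print)

code:

#include <cstdio>

struct Loud {
    Loud() = default;
    Loud(const Loud &) { std::puts("copy"); }
    Loud operator*(int) const { std::puts("multiply"); return Loud(); }
};

template <typename TT>
TT double_number(const TT &number)
{
    TT Number = TT(number);   // this copy is not elidable
    return Number * 2;        // the result temporary is elided into the
                              // return slot, but Number had to exist
}

int main() {
    Loud x;
    Loud y = double_number(x);   // prints "copy", then "multiply"
    (void)y;
}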

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

JewKiller 3000 posted:

is this more true of c++ than c? my understanding was that a c compiler can reorder anything that doesn't change the program's observable behavior, including some things the programmer might not expect to be reordered. are the rules stricter in c++ because of destructors etc?

like subjunctive said, the implementation can do anything it wants if you're not legally allowed to observe the difference

there's no major difference between c and c++ on this point, but that's kind of my point. c's implicit behavior is basically restricted to fitting scalars to the size of the type you're trying to shove them into. c++ turns innocuous bits of syntax into function calls with arbitrary side-effects, but for the most part, the semantics are exactly as if you'd written those calls out in c: the compiler has to execute exactly those calls in exactly that order, unless it can prove that moving them around or doing something different has no observable effect. so e.g. that's why you sometimes have to tell the compiler what to do with std::move
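(the std::move point as a sketch; consume and the giant string are made up)

code:

#include <string>
#include <utility>

void consume(std::string s) { (void)s; }   // takes ownership of its copy

int main() {
    std::string big(1000000, 'x');
    consume(big);              // copies a megabyte, even if "big" is
                               // never touched again after this line
    consume(std::move(big));   // the explicit move hands the buffer over;
                               // "big" is left valid but unspecified
}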

whereas in contrast, swift says a value is a value, i'm doing whatever i want, if you've got crazy dependencies between values you'd better own up; so if you pass the value of some local variable to a function and then never use that variable again the compiler will just hand that value straight to the function instead of copying it, formal lifetimes be damned. that's why swift has value types but isn't completely dependent on an explicit move operation

and we can be much more aggressive about this kind of thing precisely because we're statically compiled and don't use gc, because if we were making these optimizations dynamically and your code was subtly broken (or ours was) it would be way more likely to only blow up in crazy nondeterministic situations and it would be completely impossible for normal developers to ever figure out what was wrong by direct debugging

that turned more into a rant about swift than about c++ at the end

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
automatic reference counting, essentially like objc but with conventions that are better for being enforced by a compiler. it gets screwed by reference cycles the same as objc (and plenty of other languages). you can even switch to manual reference counting with Unmanaged<T> if you want

"gc" is generally reserved for something that analyzes the actual object reference graph and is capable of deallocating objects even if they're part of a reference cycle. at its core this always involves walking the heap following references, usually asynchronously or at least unpredictably, although most of the last 20+ years of research in gc has been focused on doing fewer complete heap walks or doing them iteratively or at least not needing to shut down the rest of the process while they're happening

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Subjunctive posted:

yeah, it's a great paper.

i agree that it's a cool paper, but i don't really agree with unifying the terms, and i would still argue that a ref-counting model tends to produce deterministic, immediate finalization in ways that expose mistakes much more reliably than the alternatives do, even if the inter-ordering of finalizers is still too complex to reasonably understand

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Notorious b.s.d. posted:

i was always told not to rely on finalizers in java, because they might not run

what is up with your code that you require certain finalization

that is still good advice in swift, which like pretty much every other language doesn't guarantee finalization during program exit. also, as subjunctive says, the exact moment of finalization is still hard to predict when the object's life has been at all complicated

but immediate reclamation is really important to us, not just for performance, but because we're a hybrid-managed environment. in many languages, all memory is managed by the language implementation; at most, you have some class with a c pointer in some private native field whose value is never ever exposed to the language. swift, like objc, and like a few other languages, does not have this property, because by design we sit right on top of c and can directly interoperate with it. if a managed object ever owns an unmanaged pointer, guess what, you have a serious problem, because memory management generally assumes that you can manage allocations separately, and so it will happily reclaim objects that own unmanaged pointers that you're still working with

this was one of the serious problems with objc gc. a lot of objc classes own unmanaged memory, e.g. NSString owning a character buffer, because it's more efficient and because they were written that way twenty years ago. imagine some method on NSString that just pulls the character buffer out of the string object, then starts working with it to, i dunno, count the grapheme clusters or something. on x86-64, self is passed in %rdi. if the compiler spills that to the stack, you don't have a bug. if the compiler loads the buffer pointer into %rax, you don't have a bug. if the compiler loads the buffer pointer into %rdi, and that was the last reference to the object, and there happens to be a gc at exactly the wrong time, guess what, you have a bug. good luck reproducing that

once you find the bug, there are a lot of things you can do; i think c# programmers have a similar problem when they're heavily pinvoking and can use "using" for it maybe? the problem is finding it at all

so having an implementation model that makes finalization more predictable, even if it's not exactly fully predictable, makes it far more likely that these problems show up in development and testing

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
i do not know why i thought this was funny, i am going to bed

rjmccall fucked around with this message at 10:49 on Mar 16, 2015

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
there are still plenty of embedded systems with 16-bit int. i mean, you could define it to be 32-bit, but then a bunch of standard functions would need to be rewritten to pass things as shorts (int_fast16_t?) for efficiency
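(for reference, <cstdint> has the standard spelling of that idea; toy example)

code:

#include <cstdint>

// "at least 16 bits, whatever is fastest here": plausibly 16 bits on a
// small microcontroller, 32 or 64 on a desktop
std::int_fast16_t triple(std::int_fast16_t x) {
    return static_cast<std::int_fast16_t>(x * 3);
}

int main() { return triple(5) == 15 ? 0 : 1; }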

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
it's useful when there's something you need to do, and it might not work, and the right solution is to try again until it does. so e.g. a compare-and-swap loop, or reading until you've filled a buffer

it is also useful in macro hacks because it's an arbitrarily complex single statement that can end in a semicolon

imo it's fine except that it steals the keyword "do" and i want that keyword goddamnit
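(both uses in one sketch; log is made up)

code:

#include <atomic>

// compare-and-swap loop: compute, try to publish, and if another thread
// raced you, try again until it sticks
int add_two(std::atomic<int> &counter) {
    int old = counter.load();
    int desired;
    do {
        desired = old + 2;
    } while (!counter.compare_exchange_weak(old, desired));
    return desired;
}

// the macro hack: an arbitrarily complex single statement that still
// ends politely at a semicolon
void log(const char *msg) { (void)msg; }
#define LOG_TWICE(msg) do { log(msg); log(msg); } while (0)

int main() {
    std::atomic<int> c{0};
    add_two(c);
    LOG_TWICE("hello");
}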

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
i do not understand this hate for whiteboard interview questions

are people asking whiteboard questions with the expectation that the candidate will write syntactically correct code with proper use of actual existent library functions? because that is dumb as hell. i ask whiteboard questions and assume the candidate is writing pseudocode that looks vaguely like java or c or whatever, and if i see them wasting time thinking about picky stuff i will gently interrupt them and tell them it doesn't matter. what i am looking for is how they think through the problem and talk about their solution and anticipate problems and respond to questions, and secondarily just to check that they have some basic ability to turn ideas into implementation. it is pretty subjective but in my experience it gives you a very clear picture of their technical ability, maybe i have a ton of false negatives but i don't think so

this "let's code review a side project" idea seems really bad. i have no idea who's actually contributed to that side project, maybe the technical design is 90% somebody else and the candidate has just learned to parrot it. and if they re-use this side project across interviews, then guess what, they've basically been coached by previous interviews / code reviews to speak intelligently about that one project. plus you are basically asking to be subjectively swayed by how cool the project seems, without any real sense of how much time the candidate's put into it or whether it was even their original idea. and what do you do if it's a collaborative project, i don't really want to discourage that, but now the existing code tells me nothing. also it is discriminatory in any number of ways because it expects the candidate to have a lot of free time to devote to a side project, which is specifically hard for older and poorer folks who are more likely to have other responsibilities

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Corla Plankun posted:

i don't understand why interviewers don't just give people the benefit of a doubt and let a 30-day probation period sort out the false positives

this job isn't that fuckin hard. if you can fake your way through an interview you'll probably make a fine python janitor

a bad hire is really expensive, both in direct costs (relocation costs, administrative costs of making the hire and setting them up, however many months of salary + benefits, possible damage to code) and opportunity costs (the lost chance to hire somebody better, lost productivity from that better candidate, lost productivity from everyone they interacted with, lower morale of the existing team, increased cynicism of the existing team about hiring new people). also a month is a really short time; if you hired somebody bad enough to fire after a month of disappointment, you hosed up badly and you need to seriously reconsider your hiring process. if somebody was just a marginal hire and you hired them anyway, it's probably at least 2-3 months before you realize that your optimistic analysis was not correct

the cynicism thing is probably the most important. a team that makes a habit of hiring marginal candidates and then firing them after a month if they don't work out is going to be a team that does not make any effort to invest in its new hires because hey, they'll probably wash out like all the others

also the fraud problem gets worse the lower your expectations are, there really are candidates who just make up everything on their resume

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
because that's the usual thing you want to know: do instances of class A behave like instances of class B

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Symbolic Butt posted:

there's a well established convention (since abelian groups I think) that you usually denote a generic commutative operator by the plus sign. sometimes it's even an implicit thing, you see some plus signs that the author never really cared to define because it's implicit by convention that it's some commutative operation

I mean, I know I'm being a pedantic rear end here with arbitrary mathematical jargon, but this is HOW I FEEEEL after wasting my time studying algebra :qq:

terms and notations in algebra are universal and well-established as long as you carefully only read one textbook. good luck finding a wikipedia page about an algebraic structure that doesn't have a paragraph talking about how different authors use the term for slightly different things

i mean, you're right, + is usually commutative, but appealing to the notational consistency of algebraists of all people is still pretty funny

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
i am pretty sure it does, but probably only in a second-tier mode

once upon a time javac did some simple static optimizations and the jit team bitched that it was making their analysis more difficult or actually inhibiting perfect optimization — which to be fair is easy to imagine for some transformations, like merging local variables — and so they changed javac to be blindingly stupid. that string builder transform is actually dictated by the language spec, otherwise they probably wouldn't do it

i don't know if they've revisited that decision, there is plenty of good research into what early optimizations do and do not inhibit later dynamic optimization

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

gonadic io posted:

also most of them are overfitting to the test data so much down to picking the best seed for their RNG

might as well go ahead and fill out their acceptance letters to the ml phd program then

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
why waste an excellent opportunity to add more data to your training set

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

pram posted:

its a real language developed by talented people

oh i thought it was developed by rob pike

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

pepito sanchez posted:

swift question if anyone here uses it:
networking. am i really reinventing the wheel trying to do something like java's remote/unicast? is the language still that much in its infancy that no decent libraries for persistence and communication exist?

you should ask in the grey forums apple dev thread, but the answer is basically that yes, swift has not yet acquired great native answers for these things. if you are interested you may help design them, but in the meantime you should be using the existing apple platform facilities, which iirc are a little dated but not nearly so ridiculously lovely that it's worth reinventing things. otoh i am astonishingly ignorant about certain things which is why you should ask in the grey forums

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
: kramers into thread shouting erratically :

apparently some goons were talking about swift error handling in here, if anyone actually gives a gently caress they can ask about it, gently caress reading previous pages tho

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
but seriously, good for you. like most things that poo poo programmers overlook, it seems useless until it's suddenly super important, why didn't anybody tell me this would happen guys

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
the other nice use case for bisect is that, when your system is architecturally complex, someone screening bugs can just bisect and maybe find an offending commit without needing to be a cross-system technical expert. your cross-system technical experts probably have better things to do than analyze every incoming bug to assign them to components

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

eschaton posted:

also when LMO comes out here for an interview maybe we can have another Bay Area goonmeet

(and you too Bloody, and others)

if this happens before christmas then i will just wait for it

otherwise i will have to invoke the sacred rite of yosmeet while i'm still here

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
you should keep master unstable, run ci on it, and then cut stable release branches which you also run ci on

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
i have heard stories from several companies which do most work on release branches and then merge back to master afterwards, only to immediately rebranch for the next release

surprise surprise it creates a poo poo ton of extra work for everyone (but especially whoever does the merge), makes commit histories completely unreadable, requires trunk to get locked for weeks every few months, and means trunk is basically worthless all the time

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Finster Dexter posted:

The issue I see here is that QA wants to test on a per-feature i.e. per-jira ticket basis. They don't test full releases, necessarily. They start testing a feature when we resolve the jira ticket and move on to other work. So, the problem I'm trying to avoid is where we end up with a branch that has poo poo from last sprint mixed in with poo poo from this sprint, so your timeline looks like:

v1 -- v2 -- v1 -- v2 -- v1

Instead, I want the timeline to be:

v1 -- v1 -- v1 -- v2 -- v2

Where v1 is Sprint v1 and v2 is Sprint v2.

so have qa test feature branches before you merge them back. you can branch from whatever point you feel is stable and doesn't have a ton of other crap you don't want qa to be simultaneously testing. if they discover problems, just fix them on your branch and repeat. you only get this interleaved history if you're repeatedly merging trunk into your feature branch, which is not something that just accidentally happens

i mean, i don't think that's the best development methodology. in my experience, work that takes a long time to complete (more than a week or two) tends to also be invasive and far-reaching. you need to land work like that as soon as you can so everyone can react to it, not keep it bottled up on an isolated branch for months only to spring it on the team during integration. and qa needs to be primarily testing integrated products; if they have enough resources to also test feature branches, great, but that's not a replacement

but if you're going to do this long-lived feature branch thing, it's not at all incompatible with having an unstable master with stable release branches

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Share Bear posted:

- make own branch
- merge branch into master after code review and testing
- eventually make new branch from master, tag as a release branch
- late work gets merged into that branch
- release, remerging into master and making anyone on a newer branch merge conflict accepting theirs

this works pretty well, unless i am using the wrong git-verbs

why merge back from the release branch if the only things there are cherry-picked from master or ad-hoc fixes that you don't want in master? also, code review on the branch is fine, but testing needs to be done primarily on the integrated branches, i.e. master and release

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Shaggar posted:

testing features individually is fine and good, but you also need to test the integration because they will absolutely step on each others toes at some point

agreeing with shaggar

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

pokeyman posted:

my armchair diagnosis is that apple tries very hard for back compat in the specific and is much happier to break in the general. so you'll get a runtime check in framework code that preserves dumb behaviour only when loaded in a certain version of the blizzard installer to keep your warcraft 3 experience intact a decade after release, but kernel extensions completely change in version x and you either adapt or stop caring

also i guess that iOS is a little easier to manage because e.g. third-party kernel extensions aren't really a thing (ok jailbreak but who cares), and i also guess that iOS brokenness ends up higher priority than macOS?

I am fully prepared to be told to eat poo poo by anyone who sits between my armchair and cupertino. this is how I currently reconcile the facts that 1) apple puts a lot of effort into back compat and 2) my poo poo keeps breaking

this is pretty much correct. a lot of the checks aren't necessarily app-specific, but they're tied to existing binaries in such a way that if you rebuild with new tools or a new sdk the workarounds turn off and you need to fix your poo poo

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Finster Dexter posted:

It prints out "poo poo", I guess.

pretty sure it prints out "poo", then "poo poo"

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Bloody posted:

it's amazing that Mac has backwards compat issues when they completely broke binary compat in what, 2007? like it's been maybe a decade, how can backwards compat issues even be a thing yet

does a library have more than one release? was the source code at all different between those releases? congratulations, it has compatibility issues

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
any reputable cs program will have a systems class in networks. it's just not usually a mandatory class, which ityool 2017 is probably a mistake

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

redleader posted:

destroy your computer, murder your coworkers so they can't stop you

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

VikingofRock posted:

So I have some C++ code that I'd like some feedback on, if someone has time to look at it. It's a class which handles concurrent memoization of the results of a function for various inputs, so that the function is only called once for each set of arguments. Here's what I've got:

your concurrency is correct but really you might as well use a single lock and map. look up the map entry optimistically and if you don't find one call the function and do an insertion. if you actually care about parallelism between keys just store a std::optional and drop the lock temporarily, you'll need to add a condition variable though and there's some risk of priority inversion. but you're literally acquiring up to three locks here per lookup, two in the steady state
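(a minimal sketch of that single-lock version; Key, Value, and the function type are stand-ins for whatever the posted class uses)

code:

#include <functional>
#include <map>
#include <mutex>

template <typename Key, typename Value>
class Memoizer {
public:
    explicit Memoizer(std::function<Value(const Key &)> fn)
        : fn_(std::move(fn)) {}

    Value get(const Key &key) {
        std::lock_guard<std::mutex> guard(lock_);
        auto it = cache_.find(key);                    // optimistic lookup
        if (it == cache_.end())
            it = cache_.emplace(key, fn_(key)).first;  // miss: compute, insert
        return it->second;   // note fn_ ran under the lock; see the
    }                        // next post for what that costs you

private:
    std::mutex lock_;
    std::map<Key, Value> cache_;
    std::function<Value(const Key &)> fn_;
};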

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

VikingofRock posted:

Thank you both very much for the feedback. Yeah, I thought my design was a little lock heavy, but my thinking was that each lock is only going to be held for the length of a lookup so it's not too bad. I think rjmccall your design with the std::optional is better though (although I am stuck on C++14 so I'll be using boost::optional). I'll give that a shot tomorrow.

you're basically killing parallelism here by acquiring the lock in the first place. if your function really is far more expensive than acquiring a lock, and you really are likely to have multiple concurrent readers in the early phase when you're still evaluating the function a lot instead of returning previously computed results, then temporarily releasing the lock does re-admit some parallelism during that early phase. on the other hand, if you really do have this much concurrency, you really should be looking at using a concurrent map instead, i.e. something designed to allow look-ups without locking, and then you can use something like call_once to safely concurrently initialize the value
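(the optional-slot variant sketched out, written against std::optional; mentally substitute boost::optional for c++14 as discussed. a real concurrent map or call_once is still the better answer at high concurrency)

code:

#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <optional>

template <typename Key, typename Value>
class ParallelMemoizer {
public:
    explicit ParallelMemoizer(std::function<Value(const Key &)> fn)
        : fn_(std::move(fn)) {}

    Value get(const Key &key) {
        std::unique_lock<std::mutex> guard(lock_);
        auto it = cache_.find(key);
        if (it == cache_.end()) {
            // claim the slot, then evaluate with the lock dropped so
            // other keys can make progress in parallel
            it = cache_.emplace(key, std::nullopt).first;
            guard.unlock();
            Value result = fn_(key);
            guard.lock();
            it->second = std::move(result);  // map iterators stay valid
            done_.notify_all();
        } else if (!it->second) {
            // somebody else claimed this key; wait for their result
            done_.wait(guard, [&] { return it->second.has_value(); });
        }
        return *it->second;
    }

private:
    std::mutex lock_;
    std::condition_variable done_;
    std::map<Key, std::optional<Value>> cache_;
    std::function<Value(const Key &)> fn_;
};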


rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

meatpotato posted:

I'm the terrible programmer who has managed to avoid writing anything concurrent for the last four years out of fear and ignorance but finally need to learn how to do it, kind of.

Maybe you guys can give me some tips for my situation, I think it might be simple. I'm using C++11.

Right now I have this:

reader_thing is waiting for data most of the time from a hardware peripheral (reader_thing is sleeping on a select() or something underneath). When data is available reader_thing unblocks and is given a pointer to that data. After doing some things, reader_thing loops and waits for data again.

What I want to add is the following:

consumer_thing is a thing that waits around until reader_thing has its pointer to new data. After consumer_thing is unblocked it can also do things with that data from reader_thing (read only). Eventually a consumer_thing loops and blocks again waiting for reader_thing to get new data. Hopefully consumer_thing didn't take too much time doing things and miss some data from reader_thing!

so, this is actually really important to the design. we can assume that it's undesirable for a consumer to miss some data. is it unacceptable? if it is acceptable, does the consumer at least need to be told that it's missed something?

also, how important is it to avoid copies? what about allocating memory?
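(for concreteness, one c++11 shape this could take, assuming a slow consumer is allowed to miss a generation, which is exactly the first question above; all the names here are made up)

code:

#include <condition_variable>
#include <cstdint>
#include <memory>
#include <mutex>

struct Packet { /* whatever the peripheral hands back */ };

std::mutex m;
std::condition_variable cv;
std::shared_ptr<const Packet> latest;  // shared_ptr keeps the buffer alive
                                       // for consumers still reading it
std::uint64_t generation = 0;          // bumped on every publish

// reader_thing, after select() unblocks:
void publish(std::shared_ptr<const Packet> p) {
    {
        std::lock_guard<std::mutex> g(m);
        latest = std::move(p);
        ++generation;
    }
    cv.notify_all();
}

// consumer_thing loop body: block until something newer than last seen
std::shared_ptr<const Packet> next(std::uint64_t &seen) {
    std::unique_lock<std::mutex> g(m);
    cv.wait(g, [&] { return generation != seen; });
    seen = generation;   // if the reader published twice meanwhile, this
                         // consumer silently skipped a packet
    return latest;
}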
