|
MALE SHOEGAZE posted: change is my axiom of choice

the axiom of choice is my copilot
|
# ¿ Jan 14, 2015 07:37 |
|
three obvious possibilities:

- you might have a static initialization ordering problem, this should be really obvious in the debugger (see the sketch below)
- you might be depending on something getting linked in (a category?) from an unused object in the .a
- your dylib is not actually the same code as your .a
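a minimal sketch of the first case, assuming two hypothetical translation units; the relative order of their dynamic initializers is unspecified:

```cpp
// ---- a.cpp ----
#include <string>
std::string greeting = "hello";        // dynamically initialized at startup

// ---- b.cpp ----
#include <string>
extern std::string greeting;
std::string shout = greeting + "!";    // if b.cpp's initializers happen to run
                                       // first, greeting is still empty here,
                                       // which is exactly what you'd see in
                                       // the debugger
```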
|
# ¿ Jan 23, 2015 06:43 |
|
static archives are a neat technology but unfortunately they solve an almost completely different technical problem from dynamic libraries and so the semantics are really different
|
# ¿ Jan 23, 2015 06:45 |
|
c-based languages are actually legitimately more difficult to do autocomplete for, c++ especially. tiny differences in inclusion order might mean you're actually finding the NULL from foo.h instead of the NULL from bar.h. and then you include baz.h, and are you actually sure you're going to get the exact same set of declarations? so much for sharing anything between files

and in c++ you write foo( and even the compiler doesn't technically know all the declarations you might be using until it sees the types of the arguments. plus all the template instantiation you have to do. plus the problems of lookup within templates

xcode does all this, fwiw, though i won't deny it has its own problems
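a contrived sketch of the inclusion-order point, with hypothetical headers; real headers guard their definitions, which is exactly why whichever one is included first wins:

```cpp
// foo.h (hypothetical)
#ifndef NULL
#define NULL 0              // integer flavor
#endif

// bar.h (hypothetical)
#ifndef NULL
#define NULL ((void *)0)    // pointer flavor
#endif

// user file: whichever header lands first decides what NULL means here, so an
// indexer has to replay the whole preprocessing history exactly to agree with
// the compiler
#include "foo.h"
#include "bar.h"
```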
|
# ¿ Feb 15, 2015 21:06 |
|
it does. it also suffers from it; xcode code completion is a lot slower and less stable than some custom thing with less fidelity might be
|
# ¿ Feb 15, 2015 21:23 |
|
yep
|
# ¿ Feb 16, 2015 00:31 |
|
bucketmouse posted: from looking through it while trying to figure out how to port over random functionality the slowdown seems to be because a big chunk of the template functions do something like this for every single variable:

tl;dr: it's not just a vc compiler thing

c++ has very rigid behavioral semantics: the language gives precise rules for deciding which operator or constructor or whatever any particular code is using, and the implementation has to do exactly that. for example, the compiler can't assume that it's okay to use a move constructor instead of a copy constructor just because it sees that the original object is about to be destroyed, or a copy constructor instead of a copy assignment operator just because it sees that you never used the old value in the variable. you wrote something, type-checking says it's implemented by calling such-and-such function, the compiler has to assume that calling some other function might have completely different semantics, no matter how related they might seem

in general this extends to order of execution as well. if two things are sequenced, they have to happen in that order, no matter how unrelated they might seem. the language says that you destroy local variables in the reverse order of their construction as you leave a scope, and it says that the operand of a return statement is evaluated within the scope in which it appears, and that means that those local variables must still exist during that evaluation

there are a few different things that let the compiler optimize despite that:
in your example, the formal series of operations is: construct a temporary from const lvalue "number", construct "Number" from rvalue temporary, destroy temporary, call operator* with address of "Number" (assuming it takes it by const reference) and consider the result to be another temporary, construct function result from rvalue temporary, destroy temporary, destroy "Number", return. both the temporaries are elidable, but "Number" is not. even if inlining shows that "number" is clearly bound to a temporary, tough luck, you're still constructing "Number"; and you're copy-constructing it, not move-constructing

now, if the matrix happens to be POD — if it stores its elements in a big inline array — the compiler has a chance of being smart about this, because compilers are sometimes smart about memcpys; but if it stores its data on the heap, you are probably screwed

but regardless you are screwed in unoptimized builds, presumably as in your test suite; compilers will do copy-elision and NRVO because they are easy and essentially amount to language guarantees, but they do not do interprocedural analysis and copy propagation because not doing that poo poo is what makes it an unoptimized build
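to make the shape concrete, a hedged sketch with hypothetical names, assuming a matrix type that stores its data on the heap:

```cpp
#include <cstddef>
#include <vector>

struct Matrix {
    std::vector<double> data;            // heap storage, so copies are real work
    explicit Matrix(std::size_t n) : data(n) {}
    Matrix(const Matrix &) = default;    // copy constructor
    Matrix(Matrix &&) = default;         // move constructor
};

Matrix scale(const Matrix &number) {
    Matrix Number = number;   // binds the copy constructor: the compiler may not
                              // silently substitute the move constructor, even
                              // if it can see "number" is about to die
    // ... the operator* call would go here ...
    return Number;            // NRVO: the copy out of "Number" can be elided,
                              // because the language explicitly permits exactly
                              // this elision
}
```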
|
# ¿ Mar 2, 2015 10:53 |
|
JewKiller 3000 posted: is this more true of c++ than c? my understanding was that a c compiler can reorder anything that doesn't change the program's observable behavior, including some things the programmer might not expect to be reordered. are the rules stricter in c++ because of destructors etc?

like subjunctive said, the implementation can do anything it wants if you're not legally allowed to observe the difference

there's no major difference between c and c++ on this point, but that's kind of my point. c's implicit behavior is basically restricted to fitting scalars to the size of the type you're trying to shove them into. c++ turns innocuous bits of syntax into function calls with arbitrary side-effects, but for the most part, the semantics are exactly as if you'd written those calls out in c: the compiler has to execute exactly those calls in exactly that order, unless it can prove that moving them around or doing something different has no observable effect. so e.g. that's why you sometimes have to tell the compiler what to do with std::move

whereas in contrast, swift says a value is a value, i'm doing whatever i want, if you've got crazy dependencies between values you'd better own up; so if you pass the value of some local variable to a function and then never use that variable again the compiler will just hand that value straight to the function instead of copying it, formal lifetimes be damned. that's why swift has value types but isn't completely dependent on an explicit move operation

and we can be much more aggressive about this kind of thing precisely because we're statically compiled and don't use gc, because if we were making these optimizations dynamically and your code was subtly broken (or ours was) it would be way more likely to only blow up in crazy nondeterministic situations and it would be completely impossible for normal developers to ever figure out what was wrong by direct debugging

that turned more into a rant about swift than about c++ at the end
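a minimal c++ sketch of that std::move point, with hypothetical names; the first call is required to copy, and only the explicit cast licenses the move:

```cpp
#include <string>
#include <utility>
#include <vector>

void consume(std::vector<std::string> v) { (void)v.size(); }

void example() {
    std::vector<std::string> words(1000, "hello");
    consume(words);            // calls the copy constructor: "words" is an
                               // lvalue, and the compiler must not quietly move
                               // from it even though we never touch it again
    std::vector<std::string> more(1000, "world");
    consume(std::move(more));  // the cast to rvalue is what licenses the move;
                               // the programmer, not the compiler, owns that
}
```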
|
# ¿ Mar 4, 2015 06:03 |
|
automatic reference counting, essentially like objc but with conventions that are better for being enforced by a compiler. it gets screwed by reference cycles the same as objc (and plenty of other languages). you can even switch to manual reference counting with Unmanaged<T> if you want

"gc" is generally reserved for something that analyzes the actual object reference graph and is capable of deallocating objects even if they're part of a reference cycle. at its core this always involves walking the heap following references, usually asynchronously or at least unpredictably, although most of the last 20+ years of research in gc has been focused on doing fewer complete heap walks or doing them iteratively or at least not needing to shut down the rest of the process while they're happening
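the cycle problem is the same one std::shared_ptr has in c++; a minimal sketch by way of analogy (Node is hypothetical), showing the conventional weak-reference fix:

```cpp
#include <memory>

struct Node {
    std::shared_ptr<Node> next;   // strong reference: participates in the count
    std::weak_ptr<Node>   prev;   // weak reference: breaks the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;                  // a keeps b alive
    b->prev = a;                  // weak: does not keep a alive
    // if prev were a shared_ptr, a and b would keep each other alive forever
    // once the locals go out of scope; there is no cycle collector to save
    // you, exactly as with arc
}
```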
|
# ¿ Mar 4, 2015 07:00 |
|
Subjunctive posted: yeah, it's a great paper.

i agree that it's a cool paper, but i don't really agree with unifying the terms, and i would still argue that a ref-counting model tends to produce deterministic, immediate finalization in ways that expose mistakes much more reliably than the alternatives, even if the inter-ordering of finalizers is still too complex to reasonably understand
|
# ¿ Mar 4, 2015 09:56 |
|
Notorious b.s.d. posted: i was always told not to rely on finalizers in java, because they might not run

that is still good advice in swift, which like pretty much every other language doesn't guarantee finalization during program exit. also, as subjunctive says, the exact moment of finalization is still hard to predict when the object's life has been at all complicated

but immediate reclamation is really important to us, not just for performance, but because we're a hybrid-managed environment. in many languages, all memory is managed by the language implementation; at most, you have some class with a c pointer in some private native field whose value is never ever exposed to the language. swift, like objc, and like a few other languages, does not have this property, because by design we sit right on top of c and can directly interoperate with it. if a managed object ever owns an unmanaged pointer, guess what, you have a serious problem, because memory management generally assumes that you can manage allocations separately, and so it will happily reclaim objects that own unmanaged pointers that you're still working with

this was one of the serious problems with objc gc. a lot of objc classes own unmanaged memory, e.g. NSString owning a character buffer, because it's more efficient and because they were written that way twenty years ago. imagine some method on NSString that just pulls the character buffer out of the string object, then starts working with it to, i dunno, count the grapheme clusters or something. on x86-64, self is passed in %rdi. if the compiler spills that to the stack, you don't have a bug. if the compiler loads the buffer pointer into %rax, you don't have a bug. if the compiler loads the buffer pointer into %rdi, and that was the last reference to the object, and there happens to be a gc at exactly the wrong time, guess what, you have a bug. good luck reproducing that

once you find the bug, there's a lot of things you can do; i think c# programmers have a similar problem when they're heavily pinvoking and can use "using" for it maybe? the problem is finding it at all

so having an implementation model that makes finalization more predictable, even if it's not exactly fully predictable, makes it far more likely that these problems show up in development and testing
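the same shape of bug is easy to write in c++, where the analogy to "managed object owning unmanaged memory" is an object you borrow a raw interior pointer from; this sketch is an analogy, not the objc gc itself, and it exists only to show the bug:

```cpp
#include <cstdio>
#include <memory>
#include <string>

int main() {
    const char *p;
    {
        auto s = std::make_shared<std::string>("some characters");
        p = s->c_str();     // borrow the unmanaged buffer owned by *s
    }                       // last reference dies here; the buffer is freed
    std::printf("%s\n", p); // use-after-free: p dangles. same shape as the gc
                            // bug above, except here it's at least
                            // deterministic, so it shows up in testing
}
```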
|
# ¿ Mar 4, 2015 19:42 |
|
i do not know why i thought this was funny, i am going to bed
# ¿ Mar 16, 2015 10:42 |
|
there are still plenty of embedded systems with 16-bit int, i mean you could define it to be 32-bit but then a bunch of standard functions would need to be rewritten to pass things as shorts (int_fast16_t?) for efficiency
|
# ¿ Mar 16, 2015 19:43 |
|
it's useful when there's something you need to do, and it might not work, and the right solution is to try again until it does. so e.g. a compare-and-swap loop, or reading until you've filled a buffer

it is also useful in macro hacks because it's an arbitrarily complex single statement that can end in a semicolon

imo it's fine except that it steals the keyword "do" and i want that keyword goddamnit
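a quick sketch of both uses (add_slowly and LOG_TWICE are hypothetical; fetch_add would do the first one directly, the loop just stands in for any read-modify-write):

```cpp
#include <atomic>
#include <cstdio>

std::atomic<int> counter{0};

void add_slowly(int delta) {
    int oldv = counter.load();
    int newv;
    do {
        newv = oldv + delta;
    } while (!counter.compare_exchange_weak(oldv, newv));
    // compare_exchange_weak writes the freshly observed value back into oldv
    // on failure, so each retry recomputes newv from current state: try again
    // until it works
}

// the macro hack: do { ... } while (0) turns a multi-statement macro into a
// single statement that composes with if/else and wants a trailing semicolon
#define LOG_TWICE(msg) do { std::puts(msg); std::puts(msg); } while (0)
```

so `if (noisy) LOG_TWICE("hi"); else quiet();` parses the way it reads, which a bare braced block would not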
|
# ¿ Mar 19, 2015 21:32 |
|
i do not understand this hate for whiteboard interview questions

are people asking whiteboard questions with the expectation that the candidate will write syntactically correct code with proper use of actually existing library functions? because that is dumb as hell. i ask whiteboard questions and assume the candidate is writing pseudocode that looks vaguely like java or c or whatever, and if i see them wasting time thinking about picky stuff i will gently interrupt them and tell them it doesn't matter. what i am looking for is how they think through the problem and talk about their solution and anticipate problems and respond to questions, and secondarily just to check that they have some basic ability to turn ideas into implementation. it is pretty subjective but in my experience it gives you a very clear picture of their technical ability, maybe i have a ton of false negatives but i don't think so

this "let's code review a side project" idea seems really bad. i have no idea who's actually contributed to that side project, maybe the technical design is 90% somebody else and the candidate has just learned to parrot it. and if they re-use this side project across interviews, then guess what, they've basically been coached by previous interviews / code reviews to speak intelligently about that one project. plus you are basically asking to be subjectively swayed by how cool the project seems, without any real sense of how much time the candidate's put into it or whether it was even their original idea. and what do you do if it's a collaborative project, i don't really want to discourage that, but now the existing code tells me nothing. also it is discriminatory in any number of ways because it expects the candidate to have a lot of free time to devote to a side project, which is specifically hard for older and poorer folks who are more likely to have other responsibilities
|
# ¿ Mar 21, 2015 22:51 |
|
Corla Plankun posted: i don't understand why interviewers don't just give people the benefit of a doubt and let a 30-day probation period sort out the false positives

a bad hire is really expensive, both in direct costs (relocation costs, administrative costs of making the hire and setting them up, however many months of salary + benefits, possible damage to code) and opportunity costs (lost chances to hire a better candidate, lost productivity from that better candidate, lost productivity from everyone they interacted with, lower morale of the existing team, increased cynicism of the existing team about hiring new people)

also a month is a really short time; if you hired somebody bad enough to fire after a month of disappointment, you hosed up badly and you need to seriously reconsider your hiring process. if somebody was just a marginal hire and you hired them anyway, it's probably at least 2-3 months before you realize that your optimistic analysis was not correct

the cynicism thing is probably the most important. a team that makes a habit of hiring marginal candidates and then firing them after a month if they don't work out is going to be a team that does not make any effort to invest in its new hires because hey, they'll probably wash out like all the others

also the fraud problem gets worse the lower your expectations are, there really are candidates who just make up everything on their resume
|
# ¿ Mar 21, 2015 23:35 |
|
because that's the usual thing you want to know: do instances of class A behave like instances of class B
|
# ¿ Mar 30, 2015 20:34 |
|
Symbolic Butt posted: there's a well established convention (since abelian groups I think) that you usually denote a generic commutative operator by the plus sign. sometimes it's even an implicit thing, you see some plus signs that the author never really cared to define because it's implicit by convention that it's some commutative operation

terms and notations in algebra are universal and well-established as long as you carefully only read one textbook. good luck finding a wikipedia page about an algebraic structure that doesn't have a paragraph talking about how different authors use the term for slightly different things

i mean, you're right, + is usually commutative, but appealing to the notational consistency of algebraists of all people is still pretty funny
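for reference, the convention in question, written out; this is the usual textbook statement, nothing specific to the post:

```latex
% additive notation is conventionally reserved for commutative operations
\[
  \text{abelian group } (G, +):\quad a + b = b + a,\quad
  \text{identity } 0,\quad \text{inverse } -a
\]
\[
  \text{general group } (G, \cdot):\quad ab \ne ba \text{ in general},\quad
  \text{identity } 1 \text{ or } e,\quad \text{inverse } a^{-1}
\]
```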
|
# ¿ Apr 5, 2015 21:38 |
|
i am pretty sure it does, but probably only in a second-tier mode

once upon a time javac did some simple static optimizations and the jit team bitched that it was making their analysis more difficult or actually inhibiting perfect optimization — which to be fair is easy to imagine for some transformations, like merging local variables — and so they changed javac to be blindingly stupid. that string builder transform is actually dictated by the language spec, otherwise they probably wouldn't do it

i don't know if they've revisited that decision, there is plenty of good research into what early optimizations do and do not inhibit later dynamic optimization
|
# ¿ Apr 6, 2015 20:58 |
|
gonadic io posted: also most of them are overfitting to the test data so much, down to picking the best seed for their RNG

might as well go ahead and fill out their acceptance letters into the ml phd program then
|
# ¿ May 15, 2015 01:01 |
|
why waste an excellent opportunity to add more data to your training set
|
# ¿ May 15, 2015 02:39 |
|
pram posted: its a real language developed by talented people

oh i thought it was developed by rob pike
|
# ¿ May 23, 2015 02:51 |
|
pepito sanchez posted: swift question if anyone here uses it:

you should ask in the grey forums apple dev thread, but the answer is basically that yes, swift has not yet acquired great native answers for these things. if you are interested you may help design them, but in the meantime you should be using the existing apple platform facilities, which iirc are a little dated but not nearly so ridiculously lovely that it's worth reinventing things. otoh i am astonishingly ignorant about certain things, which is why you should ask in the grey forums
|
# ¿ Dec 16, 2015 07:46 |
|
: kramers into thread shouting erratically : apparently some goons were talking about swift error handling in here, if anyone actually gives a gently caress they can ask about it, gently caress reading previous pages tho
|
# ¿ Jun 2, 2016 22:56 |
|
but seriously good for you, like most things that poo poo programmers overlook it seems useless until it's suddenly super important why didn't anybody tell me this would happen guys
|
# ¿ Jun 4, 2016 23:07 |
|
the other nice use case for bisect is that, when your system is architecturally complex, someone screening bugs can just bisect and maybe find an offending commit without needing to be a cross-system technical expert. your cross-system technical experts probably have better things to do than analyze every incoming bug to assign them to components
|
# ¿ Jun 6, 2016 16:45 |
|
eschaton posted: also when LMO comes out here for an interview maybe we can have another Bay Area goonmeet

if this happens before christmas then i will just wait for it. otherwise i will have to invoke the sacred rite of yosmeet while i'm still here
|
# ¿ Oct 21, 2016 06:22 |
|
you should keep master unstable, run ci on it, and then cut stable release branches which you also run ci on
|
# ¿ Oct 21, 2016 17:52 |
|
i have heard stories from several companies which do most work on release branches and then merge back to master afterwards, only to immediately rebranch for the next release

surprise surprise, it creates a poo poo ton of extra work for everyone (but especially whoever does the merge), makes commit histories completely unreadable, requires trunk to get locked for weeks every few months, and means trunk is basically worthless all the time
|
# ¿ Oct 21, 2016 21:09 |
|
Finster Dexter posted: The issue I see here is that QA wants to test on a per-feature i.e. per-jira ticket basis. They don't test full releases, necessarily. They start testing a feature when we resolve the jira ticket and move on to other work. So, the problem I'm trying to avoid is where we end up with a branch that has poo poo from last sprint mixed in with poo poo from this sprint, so your timeline looks like:

so have qa test feature branches before you merge them back. you can branch from whatever point you feel is stable and doesn't have a ton of other crap you don't want qa to be simultaneously testing. if they discover problems, just fix them on your branch and repeat. you only get this interleaved history if you're repeatedly merging trunk into your feature branch, which is not something that just accidentally happens

i mean, i don't think that's the best development methodology. in my experience, work that takes a long time to complete (more than a week or two) tends to also be invasive and far-reaching. you need to land work like that as soon as you can so everyone can react to it, not keep it bottled up on an isolated branch for months only to spring it on the team during integration. and qa needs to be primarily testing integrated products; if they have enough resources to also test feature branches, great, but that's not a replacement

but if you're going to do this long-lived feature branch thing, it's not at all incompatible with having an unstable master with stable release branches
|
# ¿ Oct 21, 2016 21:34 |
|
Share Bear posted: - make own branch

why merge back from the release branch if the only things there are cherry-picked from master or ad-hoc fixes that you don't want in master? also, code review on the branch is fine, but testing needs to be done primarily on the integrated branches, i.e. master and release
|
# ¿ Oct 21, 2016 22:10 |
|
Shaggar posted: testing features individually is fine and good, but you also need to test the integration because they will absolutely step on each other's toes at some point

agreeing with shaggar
|
# ¿ Oct 21, 2016 22:11 |
|
pokeyman posted: my armchair diagnosis is that apple tries very hard for back compat in the specific and is much happier to break in the general. so you'll get a runtime check in framework code that preserves dumb behaviour only when loaded in a certain version of the blizzard installer to keep your warcraft 3 experience intact a decade after release, but kernel extensions completely change in version x and you either adapt or stop caring

this is pretty much correct. a lot of the checks aren't necessarily app-specific, but they're tied to existing binaries in a way such that if you rebuild with new tools or a new sdk then the workarounds turn off and you need to fix your poo poo
|
# ¿ Oct 25, 2016 03:18 |
|
Finster Dexter posted: It prints out "poo poo", I guess.

pretty sure it prints out "poo", then "poo poo"
|
# ¿ Oct 25, 2016 21:57 |
|
Bloody posted: it's amazing that Mac has backwards compat issues when they completely broke binary compat in what, 2007? like it's been maybe a decade, how can backwards compat issues even be a thing yet

does a library have more than one release? was the source code at all different between those releases? congratulations, it has compatibility issues
|
# ¿ Oct 26, 2016 16:39 |
|
any reputable cs program will have a systems class in networks. it's just not usually a mandatory class, which ityool 2017 is probably a mistake
|
# ¿ Apr 20, 2017 18:08 |
|
redleader posted: destroy your computer

murder your coworkers so they can't stop you
|
# ¿ May 23, 2017 05:06 |
|
VikingofRock posted: So I have some C++ code that I'd like some feedback on, if someone has time to look at it. It's a class which handles concurrent memoization of the results of a function for various inputs, so that the function is only called once for each set of arguments. Here's what I've got:

your concurrency is correct but really you might as well use a single lock and map. look up the map entry optimistically and if you don't find one call the function and do an insertion. if you actually care about parallelism between keys just store a std::optional and drop the lock temporarily, you'll need to add a condition variable though and there's some risk of priority inversion. but you're literally acquiring up to three locks here per lookup, two in the steady state
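a sketch of the single-lock shape being described, with hypothetical names; note that the function runs under the lock here, so different keys serialize behind it, which is the trade-off the follow-up below gets into:

```cpp
#include <functional>
#include <map>
#include <mutex>
#include <utility>

template <typename Key, typename Value>
class Memoizer {
    std::function<Value(const Key &)> fn;
    std::map<Key, Value> cache;
    std::mutex mutex;

public:
    explicit Memoizer(std::function<Value(const Key &)> f) : fn(std::move(f)) {}

    Value operator()(const Key &key) {
        std::lock_guard<std::mutex> lock(mutex);  // the one and only lock
        auto it = cache.find(key);                // optimistic lookup
        if (it != cache.end()) return it->second;
        Value result = fn(key);                   // computed under the lock,
                                                  // so other keys wait here
        cache.emplace(key, result);
        return result;
    }
};
```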
|
# ¿ Aug 7, 2017 02:08 |
|
VikingofRock posted: Thank you both very much for the feedback. Yeah, I thought my design was a little lock heavy, but my thinking was that each lock is only going to be held for the length of a lookup so it's not too bad. I think rjmccall your design with the std::optional is better though (although I am stuck on C++14 so I'll be using boost::optional). I'll give that a shot tomorrow.

you're basically killing parallelism here by acquiring the lock in the first place. if your function really is far more expensive than acquiring a lock, and you really are likely to have multiple concurrent readers in the early phase when you're still evaluating the function a lot instead of returning previously computed results, then temporarily releasing the lock does re-admit some parallelism during that early phase. on the other hand, if you really do have this much concurrency, you really should be looking at using a concurrent map instead, i.e. something designed to allow look-ups without locking, and then you can use something like call_once to safely concurrently initialize the value
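for what it's worth, a third shape that stays within the standard library and works in c++14 (ParallelMemoizer is hypothetical): park a shared_future per key so the lock is held only for the find-or-insert, and run the function outside the lock. this is a sketch, not a true concurrent map, but it re-admits parallelism between keys without a condition variable:

```cpp
#include <functional>
#include <future>
#include <map>
#include <mutex>
#include <utility>

template <typename Key, typename Value>
class ParallelMemoizer {
    std::function<Value(const Key &)> fn;
    std::map<Key, std::shared_future<Value>> cache;
    std::mutex mutex;

public:
    explicit ParallelMemoizer(std::function<Value(const Key &)> f)
        : fn(std::move(f)) {}

    Value operator()(const Key &key) {
        std::packaged_task<Value()> task;
        std::shared_future<Value> future;
        {
            std::lock_guard<std::mutex> lock(mutex);
            auto it = cache.find(key);
            if (it == cache.end()) {
                task = std::packaged_task<Value()>(
                    [this, key] { return fn(key); });
                it = cache.emplace(key, task.get_future().share()).first;
            }
            future = it->second;
        }                          // lock released before any real work
        if (task.valid()) task();  // we won the race: compute outside the lock
        return future.get();       // duplicate callers of this key block here;
                                   // other keys proceed in parallel
    }
};
```

exceptions thrown by the function propagate through the future to every waiter, which is roughly what you want from a memoizer anyway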
|
# ¿ Aug 7, 2017 06:22 |
|
meatpotato posted: I'm the terrible programmer who has managed to avoid writing anything concurrent for the last four years out of fear and ignorance but finally need to learn how to do it, kind of.

so, this is actually really important to the design. we can assume that it's undesirable for a consumer to miss some data. is it unacceptable? if it is acceptable, does the consumer at least need to be told that it's missed something?

also, how important is it to avoid copies? what about allocating memory?
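to make the trade-off concrete, a tiny sketch of one point in that design space (LossyQueue is hypothetical): losing data is allowed, losing it silently is not, and every pop tells the consumer how much it missed:

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <utility>

template <typename T>
class LossyQueue {
    std::deque<T> items;
    std::uint64_t dropped = 0;   // producer overruns since the last pop
    std::mutex mutex;
    std::size_t capacity;

public:
    explicit LossyQueue(std::size_t cap) : capacity(cap) {}

    void push(T value) {
        std::lock_guard<std::mutex> lock(mutex);
        if (items.size() == capacity) {   // full: evict the oldest instead of
            items.pop_front();            // blocking the producer
            ++dropped;
        }
        items.push_back(std::move(value));
    }

    // returns whether anything was popped; on success, "out" gets the item and
    // "missed" gets the number of items lost before it
    bool pop(T &out, std::uint64_t &missed) {
        std::lock_guard<std::mutex> lock(mutex);
        if (items.empty()) return false;
        out = std::move(items.front());
        items.pop_front();
        missed = dropped;
        dropped = 0;
        return true;
    }
};
```

swap the eviction for a blocking wait and you're at the other end of the design space, where the consumer never misses data but the producer can stall; that's exactly why the questions above come first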
|
# ¿ Aug 7, 2017 06:36 |