|
Notorious b.s.d. posted:it sounds like the actual problem is that you have hilariously unnatural junctions between projects

i can't quote this hard enough
|
# ? Feb 19, 2016 03:13 |
|
|
we have a bunch of different repos that all get cloned as sibling directories by one master toolchain repo that knows how to build all of them. it works pretty ok, except that Jenkins doesn't display commit histories because all it does is check out the top-level repo, where things rarely change. also we have 4 people on my team
|
# ? Feb 19, 2016 03:22 |
|
Lutha Mahtin posted:ive never used jenkins but the name sounds like a cross between a butler and a crusty old maintenance guy who deals with all the gross stuff
|
# ? Feb 19, 2016 04:27 |
|
Notorious b.s.d. posted:defying conway's law yields unhappy results 100% of the time
|
# ? Feb 19, 2016 04:34 |
|
Plorkyeran posted:either you need to reorganize your company or design your software to avoid it. the latter is usually much easier for anything but a small startup

what conway teaches us is that you can't design your software to avoid it. can't be done. it's as inevitable as gravity. the software will resemble the org chart no matter what, so the goal is to have as sane an organization as possible. if your company's people are well-organized, conway's law will work in your favor. if your teams were determined by throwing darts at a board, conway's law will be your daily nemesis
|
# ? Feb 19, 2016 04:46 |
|
JawnV6 posted:my view is shaped by the belief that as software systems grow more and more complex and as the stakes get higher, SW development is going to look a lot like ASIC development.

my belief is that as software systems grow more and more complex and as the stakes get higher, SW development is going to look a lot like all the other complex, high-stakes things in the world, which is to say cobbled together in an ad-hoc fashion by people who don't understand it and held together by mostly luck and a lot of guesswork.
|
# ? Feb 19, 2016 04:56 |
|
tef posted:also lol git submodules

did you just tell me to go gently caress myself? git submodules are responsible for so much suffering in the world
|
# ? Feb 19, 2016 04:58 |
|
HappyHippo posted:transpiler banned
|
# ? Feb 19, 2016 05:00 |
|
Shaggar posted:cant you tell Jenkins which projects in the source to build?

yeah, but you want it to monitor the subtree so it only triggers a build when the subtree is updated

FamDav posted:do you have any idea why bazel has been getting a lot of talk over buck when buck is literally a reimplementation of blaze? or is it just because its recently released.

i think because people feel like its "the original", even though its not really. i found buck easier to get up and going, but they very obviously share a common origin
|
# ? Feb 19, 2016 05:00 |
|
i mean come on jawn are you honestly looking at consumer software development trends and saying "hmm, yes, we're finally starting to understand how to use rigorous software testing to produce software with virtually zero bugs?"
|
# ? Feb 19, 2016 05:05 |
|
Notorious b.s.d. posted:what conway teaches us is that you can't design your software to avoid it. can't be done. it's as inevitable as gravity. the software will resemble the org chart no matter what, so the goal is to have as sane an organization as possible

the "it" to be avoided is the part of the sentence you deleted, not conway's law
|
# ? Feb 19, 2016 05:14 |
|
I tried to read some obj-c today and my eyes slid right off of it, how on earth did apple manage to get people to develop in that lovely language
|
# ? Feb 19, 2016 05:57 |
Is a formatter/auto-linter a cispiler?

edit: obfuscators too

VikingofRock fucked around with this message at 06:42 on Feb 19, 2016 |
|
# ? Feb 19, 2016 06:17 |
|
uncurable mlady posted:I tried to read some obj-c today and my eyes slid right off of it, how on earth did apple manage to get people to develop in that lovely language

it's really good once you get past how weird it looks.
|
# ? Feb 19, 2016 06:24 |
|
rotor posted:i mean come on jawn are you honestly looking at consumer software development trends and saying "hmm, yes, we're finally starting to understand how to use rigorous software testing to produce software with virtually zero bugs?"

not in those terms. there's a general slackening of standards as we went from bit twiddling asm, up to keep-chuggin-PHP, down to ASI in js. but that's more like the tide receding than the mountain being flattened: software as a tool getting opened up to a broader population. quick iteration, worse is better, there's room for fast/cheap languages. the "stakes" on a fart app are low. and why is 'consumer' the benchmark of anything? maybe there's better things to build.

systems, for lack of a better term, do require those standards. the application layer can crash all it wants; something underneath it can't be as freewheeling. javascript hitting rowhammer is a hint that abstraction might break soon.

fault tolerance could mean a lot of things. right now there's an engineered circuit that does all it can to present a digital abstraction layer above the actual churn of electrons. why fight it? maybe the control path has to be robust/deterministic but you could have a data path with different properties. still, determinism is the knife that's splitting the world
|
# ? Feb 19, 2016 06:46 |
|
uncurable mlady posted:I tried to read some obj-c today and my eyes slid right off of it, how on earth did apple manage to get people to develop in that lovely language

actually I think you'll find that once it clicks you'll want everything you do to follow Smalltalk/ObjC style
|
# ? Feb 19, 2016 06:48 |
|
VikingofRock posted:Is a formatter/auto-linter a cispiler?

makes u think...
|
# ? Feb 19, 2016 06:49 |
|
HappyHippo posted:js to elm transpiler! all we need is a "hip" name for it

my current project is called HIP
|
# ? Feb 19, 2016 13:13 |
|
uncurable mlady posted:I tried to read some obj-c today and my eyes slid right off of it, how on earth did apple manage to get people to develop in that lovely language

no other options
|
# ? Feb 19, 2016 13:40 |
|
In a former life I programmed in Objective C. I can't make heads nor tails of it now.
ErIog fucked around with this message at 15:36 on Feb 19, 2016 |
# ? Feb 19, 2016 14:01 |
|
ErIog posted:In a former life I programmed in Objective C. I can't make heads nor tails of if now.

literally the same
|
# ? Feb 19, 2016 15:32 |
|
ErIog posted:In a former life I programmed in Objective C. I can't make heads nor tails of if now.

well, you see, "if" is a conditional, which means...
|
# ? Feb 19, 2016 15:33 |
|
prefect posted:well, you see, "if" is a conditional, which means...

YOSPOS > Terrible Programmers > You see "if" is a conditional
|
# ? Feb 19, 2016 15:44 |
|
JawnV6 posted:2) pre-compile verification is done on small subsets or at an extremely slow speed (1 Hz), but with determinism and full visibility

I'm not sure I get that. My understanding is that running things in a deterministic environment is usually safe and simple. Nondeterminism is what adds problems. If you run the real world stuff in the non-deterministic environment, then isn't the deterministic testing only as good as you are able to reproduce all possible interleavings of nondeterminism in your deterministic runs?

I can understand the value of reproducing a nondeterministic bug in a deterministic environment, meaning 'the bug is reproducible 100%; fix it now and you know it's fixed in the other case', but it seems like that's the only place you'd get actual solid value out of it? I mean if you test your poo poo in a deterministic environment scheduled to be Jan 17 1983 and it works all the time, but once you run it on Dec 31 1983 and it changes dates and you don't know what happens, the issue was not covered by your deterministic run, but uncovered in the nondeterministic one, right? Discover had that problem in 2006 where they considered delaying a flight because it would cross the change of year date and become day 366 on a 365 day year and they didn't want that.

Or do you assume that the deterministic runs are done in sufficient number and quantity that they cover all the cases? Why not use a formal proof or model-checking then instead?
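(an illustration of the "deterministic environment scheduled to be Jan 17 1983" idea: if the code under test takes its clock as a parameter, you can pin the run to whatever date you want. the names here are mine, not from any real codebase)

```scala
import java.time.{Clock, Instant, LocalDate, ZoneOffset}

// code that asks an injected Clock for "now" can be pinned to any instant
def dayOfYear(clock: Clock): Int = LocalDate.now(clock).getDayOfYear

// deterministic run "scheduled" at year-end 1983 (not a leap year)
val eve1983 = Clock.fixed(Instant.parse("1983-12-31T12:00:00Z"), ZoneOffset.UTC)
// same code pinned to year-end 1984 (a leap year): the day-366 case
val eve1984 = Clock.fixed(Instant.parse("1984-12-31T12:00:00Z"), ZoneOffset.UTC)

println(dayOfYear(eve1983)) // 365
println(dayOfYear(eve1984)) // 366
```

of course this only covers the dates you thought to pin, which is exactly the objection above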
|
# ? Feb 19, 2016 16:22 |
|
You can use randomized tests to attempt to get good coverage of those edge cases. But even if you are running those randomized tests, you still want to be able to deterministically re-run the exact same scenario that failed in the past. How else are you going to be sure you fixed the issue you uncovered?
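(sketch of what that looks like in practice: drive all the randomness from one explicit seed, log the seed when a run fails, and re-running with that seed replays the exact scenario deterministically. the harness names are made up)

```scala
import scala.util.Random

// generate a "random" test scenario entirely from one seed
def scenario(seed: Long): List[Int] = {
  val rng = new Random(seed)
  List.fill(5)(rng.nextInt(100))
}

val seed = System.nanoTime() // a real harness would log this on failure
val firstRun = scenario(seed)
val replay   = scenario(seed) // same seed => bit-identical scenario
println(firstRun == replay)   // true
```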
|
# ? Feb 19, 2016 16:40 |
|
Jabor posted:You can use randomized tests to attempt to get good coverage of those edge cases. But even if you are running those randomized tests, you still want to be able to deterministically re-run the exact same scenario that failed in the past. How else are you going to be sure you fixed the issue you uncovered?

Well, yes, but there's a difference between 'figure out a deterministic way to reproduce a nondeterministic bug' and 'run deterministic tests until all nondeterministic bugs are fixed', and there's a position in between. Roughly, I wouldn't trust a product that has only had deterministic tests if they aren't representative of its production environment, so I'm asking for clarifications.
|
# ? Feb 19, 2016 16:43 |
|
our dev team have been loving up their merges for the last 4 months: merging all changes into a single branch instead of the trunk, then branching from that branch instead of the trunk. could be worse, because at least the branches themselves are still valid and it just needs a mega merge to trunk, but still, wtf.
|
# ? Feb 19, 2016 16:47 |
|
Powerful Two-Hander posted:...merges... ...trunk...

That's our svn! :iamafag:
|
# ? Feb 19, 2016 16:54 |
|
Progressive JPEG posted:That's our svn! :iamafag:

as soon as i realised what they'd done i knew exactly what had happened. guaranteed they used the merge function in Tortoise, and that defaults to merge to working copy with no warnings. Tortoise svn is garbage
|
# ? Feb 19, 2016 17:09 |
|
MononcQc posted:I'm not sure I get that. My understanding is that running things in a deterministic environment is usually safe and simple. Nondeterminism is what adds problems. If you run the real world stuff in the non-deterministic environment, then isn't the deterministic testing only as good as you are able to reproduce all possible interleavings of nondeterminism in your deterministic runs?

MononcQc posted:I can understand the value of reproducing a nondeterministic bug in a deterministic environment meaning 'the bug is reproducible 100%; fix it now and you know it's fixed in the other case', but it seems like the only place you'd get actual solid value out of it?

im saying the absolute best, state of the art in SW debug is a mode that ASIC designers consider pure-terror-groping in the dark, only employed as an absolute last resort with folks still grumbling their doubts. the "only place you'd get actual solid value" is the gold standard, and anything less is a duct-tape shim band-aid over a real problem. you're sorta down on the deterministic environment and I think it's some SW blinders. the full-REPL you're accustomed to has that pesky 8-week/millions-of-dollars chunk in the middle now; you don't have the luxury to make arbitrary changes and see them reflected in the real system on a whim.

MononcQc posted:Or do you assume that the deterministic runs are done in sufficient number and quantity that they cover all the cases? Why not use a formal proof or model-checking then instead?

the other cheat, if i'm granted the "sw evolves to asic" conceit, is the problem that ASIC teams are hitting now. google up "verification crisis" for the real literature, but the short version is that architect (dream up features) and designer (code features & circuit concern) headcounts are flat and verification/validation is exploding. your ability to check and test is what limits new development, not how clever or dedicated the team is.

I was maintaining a set of formal proofs 10 years ago; if "formal" was the magic wand that made all the complexity go away it would've done so already. it's already used a lot around historically tricky areas, like the data path on execution units, where a space of 2^64*2^64 inputs is infeasible to check one by one.
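(the arithmetic on that claim, for the curious — the checker throughput is a back-of-envelope number i made up, not from any real verification plan)

```scala
// exhaustively checking a 64x64-bit datapath means 2^128 input pairs
val cases = BigInt(2).pow(128)
// grant an absurdly fast checker: 10^12 cases per second
val perSecond = BigInt(10).pow(12)
val secondsPerYear = BigInt(31557600L)
val years = cases / (perSecond * secondsPerYear)
println(years) // on the order of 10^19 years, hence formal methods instead
```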
|
# ? Feb 19, 2016 18:58 |
|
Best Practices for Architecting Highly Monitorable Applications. Recommendations are literally to not use stored procedures because "they're difficult to monitor". and architecting is not a word
|
# ? Feb 19, 2016 19:20 |
|
Jesus christ scala xml literals are implemented poorly. They seem fine, until suddenly you spend three hours working out why a test is failing on two identical xml nodes:

code:
Prints:
<Tag>1</Tag>
<Tag>1</Tag>
<Tag>1</Tag>
true
false

P.s. The explanation to this behaviour (until I knew exactly what to Google for) was in a comment on a stack overflow answer from 2011

gonadic io fucked around with this message at 20:00 on Feb 19, 2016 |
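(the forum ate the actual code block, but here's my guess at the classic version of this bug, assuming Scala 2 XML literals with the scala.xml library on the classpath: `{1}` wraps an `Atom[Int]` while a literal `1` parses as `Text("1")`, so the nodes render identically but compare unequal)

```scala
// hypothetical reconstruction -- the original snippet was lost
val a = <Tag>{1}</Tag>   // child is Atom[Int](1)
val b = <Tag>1</Tag>     // child is Text("1")
val c = <Tag>{"1"}</Tag> // child is Atom[String]("1")

println(a); println(b); println(c) // all three render as <Tag>1</Tag>
println(b == c) // true: Text("1") and Atom("1") hold equal data
println(a == b) // false: Atom(1) holds the Int 1, not the String "1"
```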
# ? Feb 19, 2016 19:52 |
|
why are you using scala?
|
# ? Feb 19, 2016 20:41 |
|
Shaggar posted:why are you using scala?

Just look back to one of the last few times we've had this very discussion in this very thread
|
# ? Feb 19, 2016 21:36 |
|
oh. you seem to have lots of problems with it.
|
# ? Feb 19, 2016 21:36 |
|
gonadic io posted:
so much stuff is like this even for 'major' languages or toolsets i wonder how the gently caress anyone was supposed to get anything done before stackoverflow existed and the untapped mine of autistic eastern european programmers was opened to the world
|
# ? Feb 19, 2016 21:41 |
|
nntp, isn't that exactly where ESR haunted?
|
# ? Feb 19, 2016 21:44 |
|
Shaggar posted:oh. you seem to have lots of problems with it.

every day that i don't post "scala is bullshit" in this thread is a day in which i didn't have problems with it. i am switching back to f# for my game though (mostly so i can use unity). but then i'm also planning on doing the number crunching in rust (follow my progress in that thread in the grey forums!) soooo
|
# ? Feb 19, 2016 21:54 |
|
|
# ? Feb 19, 2016 22:05 |
|
|
yeah this one's a bit ambitious not gonna lie
|
# ? Feb 19, 2016 22:10 |