|
pointsofdata posted:That yellow arrow in visual studio which lets you change what code is going to execute next is great*. I love stepping over functions until something wrong looking happens then going back and stepping in lotta debuggers have change next instruction, its very handy hackbunny posted:my favorite visual studio debugging feature is edit-and-continue, it's this nearly magic collaboration between linker and debugger that lets you change the code being executed without relaunching the process its awesome but doesnt work in x64 builds iirc ~Coxy posted:my favourite is being able to use lambdas in the Immediate window probably can confession: im a clown who doesnt use conditional breakpoints
|
# ? Feb 17, 2016 14:22 |
|
Bloody posted:its awesome but doesnt work in x64 builds iirc it does since either 2013 or 2015
|
# ? Feb 17, 2016 14:24 |
|
oh neat im gonna have to start using that again
|
# ? Feb 17, 2016 14:25 |
|
tef posted:i use print statements in my own code, and because of the work i do, they often get transformed into logging statements what is a log statement, other than a fancy print()?? i often start out logging some cruddy broken operation at verbose level, then once it isn't crapping its pants every run i dial it down, or even remove it! makes me feel smart and productive
|
# ? Feb 17, 2016 14:33 |
|
i leave the log statements there but set them to different levels of verbosity then adjust the runtime verbosity with a config file then leave it set to the most verbose level because w/e lol
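a minimal python sketch of that pattern, for the record — the logger name and the LOG_LEVEL value are made up, and in real life LOG_LEVEL would come out of your config file instead of a constant:

```python
import logging

# stand-in for a value read from a config file (hypothetical name)
LOG_LEVEL = "DEBUG"

logging.basicConfig(
    level=getattr(logging, LOG_LEVEL),
    format="%(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("cruddy.operation")

def do_operation(x):
    # these stay in the code forever; dialing verbosity up or down
    # is a config change, not a code change
    log.debug("starting with x=%r", x)
    result = x * 2
    log.info("finished, result=%r", result)
    return result

do_operation(21)
```

point being: once it's a logger call instead of a print, "remove the noise" stops being an edit and becomes a knob.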
|
# ? Feb 17, 2016 14:51 |
|
tef posted:hi, use logging anywhere you would use a print statement. logger.Debug("whatever {0}",boner) is just as easy as Console.Trace("whatever {0}",boner)
|
# ? Feb 17, 2016 15:42 |
|
hackbunny posted:my favorite visual studio debugging feature is edit-and-continue, it's this nearly magic collaboration between linker and debugger that lets you change the code being executed without relaunching the process this has never worked for me ever. and it was a promised feature of vs2015 for asp.net but it doesn't work if you're using the debugger which makes it totally pointless.
|
# ? Feb 17, 2016 15:44 |
|
ugh at my current gig we have a monorepo because the overhead of maintaining a bunch of different repos was annoying, but now we have a new problem where our CI system is super naive and just runs all the tests any time someone pushes this seems like it should be an easily solvable problem but it turns out basically all CI systems are terrible and make this really hard to do without a bunch of custom infrastructure in retrospect maybe having to merge 4 different PRs for every feature wasn't so bad...
|
# ? Feb 17, 2016 15:44 |
|
abraham linksys posted:ugh at my current gig we have a monorepo because the overhead of maintaining a bunch of different repos was annoying, but now we have a new problem where our CI system is super naive and just runs all the tests any time someone pushes the easy fix is to cut it down to some smoke tests that you run on every build to catch the blatant poo poo, and run the full test suite hourly or however often your test machines can churn through all those tests. if any of the tests fail, then it can binary search through the intervening changes to find out what broke it. all the benefits of running all the tests at every change, at a tiny fraction of the resource cost.
|
# ? Feb 17, 2016 16:14 |
|
Jabor posted:the easy fix is to cut it down to some smoke tests that you run on every build to catch the blatant poo poo, and run the full test suite hourly or however often your test machines can churn through all those tests. if any of the tests fail, then it can binary search through the intervening changes to find out what broke it. all the benefits of running all the tests at every change, at a tiny fraction of the resource cost. yeah i've worked at companies with just straight-up manually triggered builds and it's not the end of the world but i feel like i'm gonna get poo poo on if i even suggest that my coworkers are weirdly perfectionist and trying to strive for an idealized infrastructure we would need 3x the manpower to actually implement so instead nothing gets done and we pile 10x engineered hacks on top of each other
|
# ? Feb 17, 2016 16:25 |
|
FamDav posted:so its the actual build tools that guarantee a particular build is bit-identical whereas its the build systems job to make sure configuration, artifacts, and initial state are identical from build to build. If you use their build rules correctly they will indeed produce reproducible builds (for Java/c++ at least).
|
# ? Feb 17, 2016 16:54 |
|
btw we're currently using gitlab ci which is a loving dumpster fire it used to have a CI_BUILD_BEFORE_SHA env var that you could use in your CI build script so that you could look up the commit range but they silently broke it a few major versions ago and their current roadmap for this is "¯\_(ツ)_/¯" https://gitlab.com/gitlab-org/gitlab-ce/issues/3210 we were gonna use that to build a little partial rebuild script but now we can't so... idk maybe we'll switch to jenkins which might be a configuration nightmare but at least it probably works the only other solution I can think of here is to do what bazel apparently recommends for CI services that don't maintain state which is to just diff the current ref against origin/master and use that to build your "changeset." you still run way more tests than you "need to" but if your feature branch only touches one "service" you can at least not involve a bunch of unnecessary tests for other services
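fwiw the stateless diff-against-master approach can be sketched in a few lines of python — the service names and repo layout here are invented, and the pure path-to-service mapping is the only interesting part:

```python
import subprocess

# top-level dirs we treat as independently testable services (made-up names)
SERVICES = {"api", "worker", "frontend"}

def changed_services(changed_paths):
    """Map changed file paths to the set of services whose tests should run."""
    hit = set()
    for path in changed_paths:
        top = path.split("/", 1)[0]
        if top in SERVICES:
            hit.add(top)
        else:
            # change to shared code / CI config: be conservative, test everything
            return set(SERVICES)
    return hit

def changed_paths_since(ref="origin/master"):
    # files touched since the merge-base with master (three-dot diff)
    out = subprocess.run(
        ["git", "diff", "--name-only", ref + "...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()
```

still runs more than strictly necessary (anything touching shared code triggers the world) but a branch that only pokes one service only pays for one service.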
|
# ? Feb 17, 2016 16:56 |
|
how long is a full rebuild and test taking
|
# ? Feb 17, 2016 17:07 |
|
extremely "it depends." we have 16 jobs that get triggered on push split across 8 runners. some of these jobs are frontend tests that just run `npm install && npm test`, but most of them are backend tests that run inside docker containers (so they rebuild containers, with a cache of course, but if any dependency in a requirements.txt or whatever changes it has to reinstall all of them because ~docker~). there's one job that's supposed to build everything and push containers to a docker registry and that one takes a looong time, and an integration one that takes a loooong time... but i mean really the answer is "they take long enough that we end up with a queue with the last half hour of pushes' worth of builds in it" which is silly when most of these pushes are like "change one test in one service"
|
# ? Feb 17, 2016 17:14 |
|
another step up from print/log debugging is assert, if applicable. also generally a nice thing you can leave in the code afterward. asserts are a great way to turn "stack trace of a system in an already failed state" to "stack trace of the system trying to enter a failed state" which is exactly what you want.
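tiny python illustration (function and names made up) — the assert fires at the moment the bad value shows up, not three stack frames later:

```python
def apply_discount(price, fraction):
    # fail *here*, where the bad state is being entered, instead of
    # producing a nonsense price that blows up somewhere downstream
    assert 0.0 <= fraction <= 1.0, f"bad discount fraction: {fraction!r}"
    return price * (1.0 - fraction)

apply_discount(100.0, 0.5)    # fine
# apply_discount(100.0, 5)    # AssertionError right at the entry point
```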
|
# ? Feb 17, 2016 17:23 |
|
i reckon the real answer to all this is "find a new job where i can just write my goddamn javascript in peace instead of spending 50% of my day fixing esoteric issues in our tooling and infrastructure" right now i'm currently figuring out how to repackage scipy as a debian package because it takes 80000 hours to `pip install` and our docker build cache for the container that uses it keeps busting for some reason and i'm having a real moment of "how the gently caress did I end up here"
|
# ? Feb 17, 2016 17:24 |
|
TwoDice posted:If you use their build rules correctly they will indeed produce reproducible builds (for Java/c++ at least). it's like the ocean spanning fiber vs the last mile of cable to your home. the ocean spanning fiber is huge and makes what you're trying to do possible, but the last mile is what actually connects everything together and is also the most ad-hoc.
|
# ? Feb 17, 2016 18:09 |
|
abraham linksys posted:i reckon the real answer to all this is "find a new job where i can just write my goddamn javascript in peace instead of spending 50% of my day fixing esoteric issues in our tooling and infrastructure" does that exist? i thought half the point of js was ignoring all tooling precedent
|
# ? Feb 17, 2016 18:10 |
|
i mean i spend a lot of time configuring js build tools but fixing a webpack config or whatever is a simple task for one person and not, y'know, a massive devops undertaking
|
# ? Feb 17, 2016 18:15 |
|
FamDav posted:it's like the ocean spanning fiber vs the last mile of cable to your home. the ocean spanning fiber is huge and makes what you're trying to do possible, but the last mile is what actually connects everything together and is also the most ad-hoc. yeah if your build rules/compilers stick timestamps and random numbers into your builds you're gonna have a bad time. (I'm looking at you __TIMESTAMP__)
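a toy python illustration of why that ruins you — hashing stands in for comparing build artifacts, and the "build" is obviously fake:

```python
import hashlib

def build(source, timestamp=None):
    # a pretend "build" that optionally embeds a timestamp,
    # the way __TIMESTAMP__ does
    blob = source if timestamp is None else f"{source}\n/* built at {timestamp} */"
    return hashlib.sha256(blob.encode()).hexdigest()

# same input, same output: reproducible, cacheable
assert build("int main(){}") == build("int main(){}")
# embed the build time and two otherwise identical builds diverge,
# so every cache keyed on output hashes misses forever
assert build("int main(){}", 1000) != build("int main(){}", 2000)
```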
|
# ? Feb 17, 2016 18:17 |
|
also if you read the Bazel mailing list where they're working through remote cache and distributed build they are very much concerned about reproducibility because they can't guarantee it if your build environment is different (or even different Dev machines). I know I'm sperging about this but seriously it's like claiming stronger consistency than your system actually has. don't do it.
|
# ? Feb 17, 2016 18:21 |
|
FamDav posted:also if you read the Bazel mailing list where they're working through remote cache and distributed build they are very much concerned about reproducibility because they can't guarantee it if your build environment is different (or even different Dev machines). yeah I don't know how much of the internal systems' reproducibility made it into bazel
|
# ? Feb 17, 2016 18:31 |
|
TwoDice posted:yeah I don't know how much of the internal systems' reproducibility made it into bazel i dont think the "everything builds against the same environment" bit made it in, though they do support (optionally) versioning your compiler toolchain which is good. my wish for us is that we'd modernize and release a version of our distributed build system because i see the open sourcing of bazel as a step towards Google creating a cloud service around it. we've been doing this kind of thing since like 2001 :/.
|
# ? Feb 17, 2016 18:42 |
|
https://github.com/miloyip/nativejson-benchmark
|
# ? Feb 17, 2016 18:48 |
|
Shaggar posted:use logging anywhere you would use a print statement. logger.Debug("whatever {0}",boner) is just as easy as Console.Trace("whatever {0}",boner) ive finally managed to shout most of my co-workers into using our internal logging package (based on spdlog) instead of just using cout and / or printf everywhere
|
# ? Feb 17, 2016 18:50 |
|
At London tech startup fair, will report back. E: scratch that, turns out they hadn't been entirely transparent about ticketing. Looked pretty low budget anyway. distortion park fucked around with this message at 19:33 on Feb 17, 2016 |
# ? Feb 17, 2016 19:28 |
|
lol it's at a boys school
|
# ? Feb 17, 2016 19:41 |
|
abraham linksys posted:but i mean really the answer is "they take long enough that we end up with a queue with the last half of hour pushes worth of builds in it" which is silly when most of these pushes are like "change one test in one service" the answer should be to concatenate changes when one docker build-push-test cycle finishes, take all commits since then and start another docker build. if the majority are minor issues that don't affect one another, you get to check off that the whole batch works and only run each commit individually when one fails and you binary search the list for the offender
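the binary-search half of that is a few lines of python — `is_good` here is a stand-in for "check out this commit, build, run the suite":

```python
def find_first_bad(commits, is_good):
    """commits is ordered oldest -> newest; assumes all good commits
    come before all bad ones (the usual bisect precondition)."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_good(commits[mid]):
            lo = mid + 1   # breakage is strictly after mid
        else:
            hi = mid       # mid is bad, maybe an earlier one too
    return commits[lo]
```

so a failing batch of n commits costs about log2(n) extra build-test cycles to pin the offender, instead of n.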
|
# ? Feb 17, 2016 19:41 |
|
first presentation is from sage lol
|
# ? Feb 17, 2016 19:42 |
|
there's eight employers listed if you want to build a mobile app / web app hosted in the cloud https://www.siliconmilkroundabout.com/ is cheesy but at least people go to it
|
# ? Feb 17, 2016 19:44 |
|
tef posted:lol it's at a boys school it's super low budget. Adobe reader just ran out of memory 5 min in
|
# ? Feb 17, 2016 19:45 |
|
tef posted:there's eight employers listed that's not on tonight though! I regret this already, it's real bad
|
# ? Feb 17, 2016 19:47 |
|
pointsofdata posted:it's super low budget. Adobe reader just ran out of memory 5 min in what twilight zone are you in
|
# ? Feb 17, 2016 19:55 |
|
JawnV6 posted:the answer should be to concatenate changes this + pipeline. if you're attempting to, e.g., run integration tests in parallel with your build and unit tests, you should think about only running them (on a batch of changes) after you've verified build and test.
|
# ? Feb 17, 2016 19:57 |
|
FamDav posted:what twilight zone are you in 5th tier startup zone
|
# ? Feb 17, 2016 19:59 |
|
FamDav posted:this + pipeline. if you're attempting to ex. run integration tests in parallel with your build and unit tests, you should think about only running them (on a batch of changes) after you've verified build and test. we had a check at build, a check for a DOA test suite, then the general regression assuming those passed, each step farming out to clusters of 10k. this was a decade ago on ASIC's but it's fun to hear SW folks hit similar solutions and the js folks thinking it's greenfield lol
|
# ? Feb 17, 2016 20:05 |
|
it's a shame there are so few C/C++ json parsers
|
# ? Feb 17, 2016 20:15 |
|
after a little playing with elm im starting to get the hang of things. one thing is really bothering me, however: the debugger does not work with StartApp.start, only StartApp.Simple.start. the randomGif sample in the arch repo shows the issue https://github.com/evancz/elm-architecture-tutorial/tree/master/examples/5 (debug mode fails with some forEach error) my app will have a whole bunch of effects, and this makes it impossible to debug poo poo. how the hell can i debug an app with effects?
|
# ? Feb 17, 2016 20:36 |
|
MononcQc posted:(I haven't read the book) that's because determinism is a thing you associate with predictability and lack of bugs, which are proxies to, but not sufficient to be synonymous with quality, no?

my view is shaped by the belief that as software systems grow more and more complex and as the stakes get higher, SW development is going to look a lot like ASIC development. NASA occupies another point on the continuum of dev approaches that's closer to ASIC. asic development is sw development with these changes: 1) compilation takes 8 weeks and millions of dollars 2) pre-compile verification is done on small subsets or at an extremely slow speed (1 Hz), but with determinism and full visibility 3) post-compilation verification runs at full speed (4 GHz), but without determinism and with reduced visibility

other changes grow from those points, like pre-si testing is often a white box approach because you can't afford to NOT run things that target new features, post-si testing is often black box because they're running real applications and can't afford to learn about every minor feature that might affect their testing. but i kinda came around to the view that determinism is a more defining issue than visibility or box color. when you have an issue reproduced in a deterministic environment, coming to the statement that something is "fixed" is inherently more rigorous. most debug efforts in post-si amount to trying to reproduce it in the pre-si environment. but it's never guaranteed to terminate. you might have a bug that clearly manifests on real systems that doesn't ever show up across years of direct & directed random testing. it's real, and must be fixed. on the 100 test systems in the lab, if all are configured and tasked with hunting this bug, you're lucky to hit it once a week. if there's a defeature option that's relevant to the failing scenario, toggling it doesn't say much. maybe it fixed it, maybe it made it twice as hard to hit so 1 failure happens only every other week.

i kinda think that extends to the real software dev. everything's deterministic in theory or could be made such at the cost of performance, but when bugs crop up at full scale/speed the option of "only do 1 transaction at a time" isn't viable and finer-grained solutions are a necessity, often falling into that same valley of "we don't understand the failure mechanism, but need to push some fix out soon" so I don't associate determinism with a "lack of bugs," it's closer to "confidence in fixes" but im pretty sure im wholly wrong about where we're going and huge teams delivering rigorous products is going to give way to smaller teams delivering fault-tolerant products
|
# ? Feb 17, 2016 20:56 |
|
|
JawnV6 posted:my view is shaped by the belief that as software systems grow more and more complex and as the stakes get higher, SW development is going to look a lot like ASIC development. NASA occupies another point on the continuum of dev approaches that's closer to ASIC. this is a good and interesting post
|
# ? Feb 17, 2016 21:06 |