  • Locked thread
Bloody
Mar 3, 2013

pointsofdata posted:

That yellow arrow in visual studio which lets you change what code is going to execute next is great*. I love stepping over functions until something wrong-looking happens then going back and stepping in



*if your application isn't full of ugly state

lotta debuggers have change next instruction, its very handy


hackbunny posted:

my favorite visual studio debugging feature is edit-and-continue, it's this nearly magic collaboration between linker and debugger that lets you change the code being executed without relaunching the process

its awesome but doesnt work in x64 builds iirc

~Coxy posted:

my favourite is being able to use lambdas in the Immediate window

now I wonder whether you can use one in a conditional breakpoint

probably can

confession: im a clown who doesnt use conditional breakpoints


~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD

Bloody posted:

its awesome but doesnt work in x64 builds iirc

it does since either 2013 or 2015

Bloody
Mar 3, 2013

oh neat im gonna have to start using that again

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

tef posted:

i use print statements in my own code, and because of the work i do, they often get transformed into logging statements

what is a log statement, other than a fancy print()??

i often start out logging some cruddy broken operation at verbose level, then once it isn't crapping its pants every run i dial it down, or even remove it! makes me feel smart and productive

Bloody
Mar 3, 2013

i leave the log statements there but set them to different levels of verbosity then adjust the runtime verbosity with a config file

then leave it set to the most verbose level because w/e lol
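The set-the-level-from-a-config-file approach is a few lines in Python's stdlib `logging`; a minimal sketch, assuming a tiny JSON config with a made-up `log_level` key:

```python
import json
import logging

def configure_logging(config_text):
    """Set the root logger level from a small JSON config,
    e.g. '{"log_level": "DEBUG"}'. The key name is hypothetical."""
    level_name = json.loads(config_text).get("log_level", "INFO")
    level = getattr(logging, level_name.upper(), logging.INFO)
    logging.getLogger().setLevel(level)
    return level
```

The log calls stay in the code at whatever verbosity you wrote them; only the config decides what actually gets emitted at runtime.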

Shaggar
Apr 26, 2006

tef posted:

hi,

print statements are great, tracing is neat, debuggers are useful

i use print statements in my own code, and because of the work i do, they often get transformed into logging statements

debuggers are great for going through other people's code, but i haven't used them much outside of reverse engineering tbh.

but uh, logging is great and the best thing ever

use logging anywhere you would use a print statement. logger.Debug("whatever {0}",boner) is just as easy as Console.Trace("whatever {0}",boner)
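Same point in Python terms (logger name and message are invented): a `log.debug(...)` costs the same keystrokes as a `print(...)`, but can be filtered, reformatted, or redirected later without touching any call site.

```python
import logging

logging.basicConfig(format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("billing")  # hypothetical module name

def charge(user, amount):
    # As easy to write as print(f"charging {user} {amount}"),
    # but silenced or rerouted later purely by configuration.
    log.debug("charging %s amount=%d", user, amount)
    return amount
```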

Shaggar
Apr 26, 2006

hackbunny posted:

my favorite visual studio debugging feature is edit-and-continue, it's this nearly magic collaboration between linker and debugger that lets you change the code being executed without relaunching the process

this has never worked for me ever. and it was a promised feature of vs2015 for asp.net but it doesn't work if you're using the debugger which makes it totally pointless.

abraham linksys
Sep 6, 2010

:darksouls:
ugh at my current gig we have a monorepo because the overhead of maintaining a bunch of different repos was annoying, but now we have a new problem where our CI system is super naive and just runs all the tests any time someone pushes

this seems like it should be an easily solvable problem but it turns out basically all CI systems are terrible and make this really hard to do without a bunch of custom infrastructure

in retrospect maybe having to merge 4 different PRs for every feature wasn't so bad...

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

abraham linksys posted:

ugh at my current gig we have a monorepo because the overhead of maintaining a bunch of different repos was annoying, but now we have a new problem where our CI system is super naive and just runs all the tests any time someone pushes

this seems like it should be an easily solvable problem but it turns out basically all CI systems are terrible and make this really hard to do without a bunch of custom infrastructure

in retrospect maybe having to merge 4 different PRs for every feature wasn't so bad...

the easy fix is to cut it down to some smoke tests that you run on every build to catch the blatant poo poo, and run the full test suite hourly or however often your test machines can churn through all those tests. if any of the tests fail, then it can binary search through the intervening changes to find out what broke it. all the benefits of running all the tests at every change, at a tiny fraction of the resource cost.
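The binary-search step is mechanical; a sketch in Python, assuming an `is_good(commit)` predicate that replays the suite at a given commit:

```python
def find_breaking_commit(commits, is_good):
    """Given commits ordered oldest..newest, where the last full run passed
    before commits[0] and the suite currently fails, binary-search for the
    first bad commit using is_good(commit) -> bool."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_good(commits[mid]):
            lo = mid + 1   # the break is after mid
        else:
            hi = mid       # mid might be the first bad commit
    return commits[lo]
```

This is the same invariant `git bisect` maintains, just over the commits since the last green full-suite run.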

abraham linksys
Sep 6, 2010

:darksouls:

Jabor posted:

the easy fix is to cut it down to some smoke tests that you run on every build to catch the blatant poo poo, and run the full test suite hourly or however often your test machines can churn through all those tests. if any of the tests fail, then it can binary search through the intervening changes to find out what broke it. all the benefits of running all the tests at every change, at a tiny fraction of the resource cost.

yeah i've worked at companies with just straight-up manually triggered builds and it's not the end of the world but i feel like i'm gonna get poo poo on if i even suggest that

my coworkers are weirdly perfectionist and trying to strive for an idealized infrastructure we would need 3x the manpower to actually implement so instead nothing gets done and we pile 10x engineered hacks on top of each other

TwoDice
Feb 11, 2005
Not one, two.
Grimey Drawer

FamDav posted:

so it's the actual build tools that guarantee a particular build is bit-identical whereas it's the build system's job to make sure configuration, artifacts, and initial state are identical from build to build.

If you use their build rules correctly they will indeed produce reproducible builds (for Java/c++ at least).

abraham linksys
Sep 6, 2010

:darksouls:
btw we're currently using gitlab ci which is a loving dumpster fire

it used to have a CI_BUILD_BEFORE_SHA env var that you could use in your CI build script so that you could look up the commit range but they silently broke it a few major versions ago and their current roadmap for this is "¯\_(ツ)_/¯" https://gitlab.com/gitlab-org/gitlab-ce/issues/3210

we were gonna use that to build a little partial rebuild script but now we can't so... idk maybe we'll switch to jenkins which might be a configuration nightmare but at least it probably works

the only other solution I can think of here is to do what bazel apparently recommends for CI services that don't maintain state which is to just diff the current ref against origin/master and use that to build your "changeset." you still run way more tests than you "need to" but if your feature branch only touches one "service" you can at least not involve a bunch of unnecessary tests for other services
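The diff-against-master trick is small enough to sketch; this assumes services live in top-level directories (a guess about the repo layout), with the pure path-to-service mapping split out so it's testable without git:

```python
import subprocess

def services_from_paths(paths):
    """Map changed file paths to services, assuming service == first
    path component (an assumption about the monorepo layout)."""
    return sorted({p.split("/", 1)[0] for p in paths if "/" in p})

def changed_services(base="origin/master"):
    """List services touched since `base`, via git's three-dot diff
    (changes on this branch since it diverged from base)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base + "...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return services_from_paths(out.splitlines())
```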

FamDav
Mar 29, 2008
how long is a full rebuild and test taking

abraham linksys
Sep 6, 2010

:darksouls:
extremely "it depends." we have 16 jobs that get triggered on push split across 8 runners. some of these jobs are frontend tests that just run `npm install && npm test`, but most of them are backend tests that run inside docker containers (so they rebuild containers, with a cache of course, but if any dependency in a requirements.txt or whatever changes it has to reinstall all of them because ~docker~). there's one job that's supposed to build everything and push containers to a docker registry and that one takes a looong time, and an integration one that takes a loooong time...

but i mean really the answer is "they take long enough that we end up with a queue with the last half hour of pushes worth of builds in it" which is silly when most of these pushes are like "change one test in one service"

crazypenguin
Mar 9, 2005
nothing witty here, move along
another step up from print/log debugging is assert, if applicable.

also generally a nice thing you can leave in the code afterward.

asserts are a great way to turn "stack trace of a system in an already failed state" to "stack trace of the system trying to enter a failed state" which is exactly what you want.
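In Python terms (the function and invariant here are invented for illustration): the assert fires at the call that tries to introduce the bad state, so the traceback points at the culprit rather than at whoever trips over the corruption three subsystems later.

```python
def apply_payment(balance, amount):
    # Fail the moment someone tries to create the bad state,
    # not later when the corrupted balance finally surfaces.
    assert amount >= 0, f"negative payment: {amount}"
    return balance - amount
```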

abraham linksys
Sep 6, 2010

:darksouls:
i reckon the real answer to all this is "find a new job where i can just write my goddamn javascript in peace instead of spending 50% of my day fixing esoteric issues in our tooling and infrastructure"

right now i'm currently figuring out how to repackage scipy as a debian package because it takes 80000 hours to `pip install` and our docker build cache for the container that uses it keeps busting for some reason and i'm having a real moment of "how the gently caress did I end up here"

FamDav
Mar 29, 2008


TwoDice posted:

If you use their build rules correctly they will indeed produce reproducible builds (for Java/c++ at least).

it's like the ocean spanning fiber vs the last mile of cable to your home. the ocean spanning fiber is huge and makes what you're trying to do possible, but the last mile is what actually connects everything together and is also the most ad-hoc.

JawnV6
Jul 4, 2004

So hot ...

abraham linksys posted:

i reckon the real answer to all this is "find a new job where i can just write my goddamn javascript in peace instead of spending 50% of my day fixing esoteric issues in our tooling and infrastructure"

does that exist? i thought half the point of js was ignoring all tooling precedent

abraham linksys
Sep 6, 2010

:darksouls:
i mean i spend a lot of time configuring js build tools but fixing a webpack config or whatever is a simple task for one person and not, y'know, a massive devops undertaking

TwoDice
Feb 11, 2005
Not one, two.
Grimey Drawer

FamDav posted:

it's like the ocean spanning fiber vs the last mile of cable to your home. the ocean spanning fiber is huge and makes what you're trying to do possible, but the last mile is what actually connects everything together and is also the most ad-hoc.

yeah if your build rules/compilers stick timestamps and random numbers into your builds you're gonna have a bad time. (I'm looking at you __TIMESTAMP__)
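A toy illustration of why `__TIMESTAMP__`-style embedding ruins reproducibility — the "build" here is just a hash standing in for a real compiler:

```python
import hashlib

def build(source, timestamp=None):
    """Toy build: hash the source, optionally mixing in a timestamp
    the way __TIMESTAMP__ does in C."""
    blob = source + ("" if timestamp is None else repr(timestamp))
    return hashlib.sha256(blob.encode()).hexdigest()

# Without a timestamp, two builds of the same source are bit-identical;
# mix one in and every rebuild hashes differently, so artifact caching
# and build comparison both stop working.
```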

FamDav
Mar 29, 2008
also if you read the Bazel mailing list where they're working through remote cache and distributed build they are very much concerned about reproducibility because they can't guarantee it if your build environment is different (or even different Dev machines).

I know I'm sperging about this but seriously it's like claiming stronger consistency than your system actually has. don't do it.

TwoDice
Feb 11, 2005
Not one, two.
Grimey Drawer

FamDav posted:

also if you read the Bazel mailing list where they're working through remote cache and distributed build they are very much concerned about reproducibility because they can't guarantee it if your build environment is different (or even different Dev machines).

I know I'm sperging about this but seriously it's like claiming stronger consistency than your system actually has. don't do it.

yeah I don't know how much of the internal systems' reproducibility made it into bazel

FamDav
Mar 29, 2008

TwoDice posted:

yeah I don't know how much of the internal systems' reproducibility made it into bazel

i dont think the "everything builds against the same environment" bit made it in, though they do support (optionally) versioning your compiler toolchain which is good.

my wish for us is that we'd modernize and release a version of our distributed build system because i see the open sourcing of bazel as a step towards Google creating a cloud service around it. we've been doing this kind of thing since like 2001 :/.

fritz
Jul 26, 2003

https://github.com/miloyip/nativejson-benchmark

fritz
Jul 26, 2003

Shaggar posted:

use logging anywhere you would use a print statement. logger.Debug("whatever {0}",boner) is just as easy as Console.Trace("whatever {0}",boner)

ive finally managed to shout most of my co-workers into using our internal logging package (based on spdlog) instead of just using cout and / or printf everywhere

distortion park
Apr 25, 2011


At London tech startup fair, will report back.

E: scratch that, turns out they hadn't been entirely transparent about ticketing. Looked pretty low budget anyway.

distortion park fucked around with this message at 19:33 on Feb 17, 2016

tef
May 30, 2004

-> some l-system crap ->
lol it's at a boys school

JawnV6
Jul 4, 2004

So hot ...

abraham linksys posted:

but i mean really the answer is "they take long enough that we end up with a queue with the last half hour of pushes worth of builds in it" which is silly when most of these pushes are like "change one test in one service"

the answer should be to concatenate changes

when one docker build-push-test cycle finishes, take all commits since then and start another docker build. if the majority are minor issues that don't affect one another, you get to check off that the whole batch works and only run each commit individually when one fails and you binary search the list for the offender
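A back-of-the-envelope sketch of the win, with a made-up `is_good_commit` predicate: each batch costs one run when it's green, plus roughly log2(n) bisection runs only when it isn't.

```python
import math

def runs_needed(batches, is_good_commit):
    """Count test runs with batching: one run per batch, plus a binary
    search over the batch's commits only when the batch run fails."""
    runs = 0
    for batch in batches:
        runs += 1  # test the concatenated batch once
        if not all(is_good_commit(c) for c in batch):
            # bisect within the batch to find the offender
            runs += math.ceil(math.log2(len(batch))) if len(batch) > 1 else 0
    return runs
```

With mostly-green batches this approaches one run per batch instead of one run per commit, which is the whole argument for concatenating.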

distortion park
Apr 25, 2011


first presentation is from sage lol

tef
May 30, 2004

-> some l-system crap ->
there's eight employers listed

if you want to build a mobile app / web app hosted in the cloud

https://www.siliconmilkroundabout.com/ is cheesy but at least people go to it

distortion park
Apr 25, 2011


tef posted:

lol it's at a boys school

it's super low budget. Adobe reader just ran out of memory 5 min in

distortion park
Apr 25, 2011


tef posted:

there's eight employers listed

if you want to build a mobile app / web app hosted in the cloud

https://www.siliconmilkroundabout.com/ is cheesy but at least people go to it

that's not on tonight though! I regret this already, it's real bad

FamDav
Mar 29, 2008

pointsofdata posted:

it's super low budget. Adobe reader just ran out of memory 5 min in

what twilight zone are you in

FamDav
Mar 29, 2008

JawnV6 posted:

the answer should be to concatenate changes

when one docker build-push-test cycle finishes, take all commits since then and start another docker build. if the majority are minor issues that don't affect one another, you get to check off that the whole batch works and only run each commit individually when one fails and you binary search the list for the offender

this + pipeline. if you're attempting to e.g. run integration tests in parallel with your build and unit tests, you should think about only running them (on a batch of changes) after you've verified build and test.

distortion park
Apr 25, 2011


FamDav posted:

what twilight zone are youin

5th tier startup zone

JawnV6
Jul 4, 2004

So hot ...

FamDav posted:

this + pipeline. if you're attempting to ex. run integration tests in parallel with your build and unit tests, you should think about only running them (on a batch of changes) after you've verified build and test.

we had a check at build, a check for a DOA test suite, then the general regression assuming those passed, each step farming out to clusters of 10k. this was a decade ago on ASICs but it's fun to hear SW folks hit similar solutions

and the js folks thinking it's greenfield lol

prefect
Sep 11, 2001

No one, Woodhouse.
No one.




Dead Man’s Band

it's a shame there are so few C/C++ json parsers :rolleye:

necrotic
Aug 2, 2005
I owe my brother big time for this!
after a little playing with elm im starting to get the hang of things. one thing is really bothering me, however: the debugger does not work with StartApp.start, only StartApp.Simple.start. the randomGif sample in the arch repo shows the issue https://github.com/evancz/elm-architecture-tutorial/tree/master/examples/5 (debug mode fails with some forEach error)

my app will have a whole bunch of effects, and this makes it impossible to debug poo poo.

how the hell can i debug an app with effects?

JawnV6
Jul 4, 2004

So hot ...

MononcQc posted:

(I haven't read the book) that's because determinism is a thing you associate with predictability and lack of bugs, which are proxies to, but not sufficient to be synonymous with quality, no?

my view is shaped by the belief that as software systems grow more and more complex and as the stakes get higher, SW development is going to look a lot like ASIC development. NASA occupies another point on the continuum of dev approaches that's closer to ASIC.

asic development is sw development with these changes:
1) compilation takes 8 weeks and millions of dollars
2) pre-compile verification is done on small subsets or at an extremely slow speed (1 Hz), but with determinism and full visibility
3) post-compilation verification runs at full speed (4Ghz), but without determinism and with reduced visibility

other changes grow from those points, like pre-si testing is often a white box approach because you can't afford to NOT run things that target new features, post-si testing is often black box because they're running real applications and can't afford to learn about every minor feature that might affect their testing. but i kinda came around to the view that determinism is a more defining issue than visibility or box color.

when you have an issue reproduced in a deterministic environment, coming to the statement that something is "fixed" is inherently more rigorous. most debug efforts in post-si amount to trying to reproduce it in the pre-si environment. but it's never guaranteed to terminate. you might have a bug that clearly manifests on real systems that doesn't ever show up across years of direct & directed random testing. it's real, and must be fixed. on the 100 test systems in the lab, if all are configured and tasked with hunting this bug, you're lucky to hit it once a week. if there's a defeature option that's relevant to the failing scenario, toggling it doesn't say much. maybe it fixed it, maybe it made it twice as hard to hit so 1 failure happens only every other week.

i kinda think that extends to the real software dev. everything's deterministic in theory or could be made such at the cost of performance, but when bugs crop up at full scale/speed the option of "only do 1 transaction at a time" isn't viable and finer-grained solutions are a necessity, often falling into that same valley of "we don't understand the failure mechanism, but need to push some fix out soon"

so I don't associate determinism with a "lack of bugs," it's closer to "confidence in fixes" but im pretty sure im wholly wrong about where we're going and huge teams delivering rigorous products is going to give way to smaller teams delivering fault-tolerant products
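The "maybe it fixed it, maybe it made it twice as hard to hit" point above is worth quantifying: under a Poisson model, a once-a-week bug knocked down to half rate still produces two quiet weeks about 37% of the time, so silence after toggling a defeature is weak evidence of a fix.

```python
import math

def p_no_failures(rate_per_week, weeks):
    """Probability of observing zero failures over `weeks` weeks when the
    bug is still present and fires at `rate_per_week` (Poisson arrivals)."""
    return math.exp(-rate_per_week * weeks)

# Halved rate (0.5/week), two weeks of silence: still ~37% likely
# even if the bug is completely unfixed.
```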


Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

JawnV6 posted:

my view is shaped by the belief that as software systems grow more and more complex and as the stakes get higher, SW development is going to look a lot like ASIC development. NASA occupies another point on the continuum of dev approaches that's closer to ASIC.

asic development is sw development with these changes:
1) compilation takes 8 weeks and millions of dollars
2) pre-compile verification is done on small subsets or at an extremely slow speed (1 Hz), but with determinism and full visibility
3) post-compilation verification runs at full speed (4Ghz), but without determinism and with reduced visibility

other changes grow from those points, like pre-si testing is often a white box approach because you can't afford to NOT run things that target new features, post-si testing is often black box because they're running real applications and can't afford to learn about every minor feature that might affect their testing. but i kinda came around to the view that determinism is a more defining issue than visibility or box color.

when you have an issue reproduced in a deterministic environment, coming to the statement that something is "fixed" is inherently more rigorous. most debug efforts in post-si amount to trying to reproduce it in the pre-si environment. but it's never guaranteed to terminate. you might have a bug that clearly manifests on real systems that doesn't ever show up across years of direct & directed random testing. it's real, and must be fixed. on the 100 test systems in the lab, if all are configured and tasked with hunting this bug, you're lucky to hit it once a week. if there's a defeature option that's relevant to the failing scenario, toggling it doesn't say much. maybe it fixed it, maybe it made it twice as hard to hit so 1 failure happens only every other week.

i kinda think that extends to the real software dev. everything's deterministic in theory or could be made such at the cost of performance, but when bugs crop up at full scale/speed the option of "only do 1 transaction at a time" isn't viable and finer-grained solutions are a necessity, often falling into that same valley of "we don't understand the failure mechanism, but need to push some fix out soon"

so I don't associate determinism with a "lack of bugs," it's closer to "confidence in fixes" but im pretty sure im wholly wrong about where we're going and huge teams delivering rigorous products is going to give way to smaller teams delivering fault-tolerant products

this is a good and interesting post
