|
Why not just be one of those cowboys? Sounds much more fun than your current role of cleaning up cowboy poo poo.
|
# ? Oct 27, 2022 00:08 |
|
Because I still have my empathy gland.
|
# ? Oct 27, 2022 00:48 |
|
The most frustrating thing I find about tests is that so much of it is in subjective grey areas. There's all this advice "do it this way, but sometimes it's better to do it that way", "you can write too {many,few} tests", "flakey tests are bad but can sometimes be better than nothing", etc, but it seems to be a total art to drawing where those boundaries are. With almost every other part of programming the app itself, I can have a reasonable discussion about coding styles and code architecture. But when it comes to tests, all these decisions feel capricious. If I have a flakey test, do I nuke it, fix it, or ignore it? Based on what? If the answer is "how important is that scenario in the system under test?" then how does one measure that?
|
# ? Oct 27, 2022 02:32 |
|
I've definitely found that tough when mentoring junior people, I can offer a hundred opinions but no simple or straight answer to send them on their way
|
# ? Oct 27, 2022 02:46 |
|
If it costs more in maintenance than it saves in product risk, it should be removed. Both of these things are measurable.
|
# ? Oct 27, 2022 02:56 |
|
minato posted:The most frustrating thing I find about tests is that so much of it is in subjective grey areas.

Nah, I don't find the principles subjective at all. You should make test tradeoffs based on their costs and their benefits. Otherwise you end up with me arguing with my team lead until he walks away in frustration saying fine, it doesn't matter since the app is probably gonna be deprecated in a couple years anyway. He wanted me to rewrite some tests to mock out other classes because the definition of a unit test is that it tests the smallest possible unit of code in isolation.

This is a pretty good example imo, since there are a lot of people who just memorized a definition like "unit test" or a phrase like "test pyramid" without understanding what they're trying to do. For example, when considering the use of test doubles vs real implementations in test dependencies, a key factor is that using real implementations where possible is more likely to discover defects sooner, because your "unit" tests (better put: "small tests") become mini-integration tests. Some complain they find it difficult to tell where exactly they broke something, as many tests tend to fail at the same time, and use that complaint in favor of mocks (or worse, they just think we should strictly use test doubles because that's the definition of a unit test), but this is in exchange for greater confidence that you're actually shipping something that works. Unless there's a particularly onerous price to pay, such that it's *obviously* too expensive to use real classes as test dependencies, I try to insist on using real classes.

And wow, that reminds me how different my TLs were.

TL 1: If I tell you to do something a certain way and you can't see how it's obviously better than the alternatives, then ask me why. If I can't explain why it's clearly better, then maybe we shouldn't be doing it that way.

TL 2: We should mock out the dependencies because these are *unit tests*. You're familiar with the definition of a unit test, right? By definition, unit tests don't exercise any logic except their own.

TL 3: Why do I want to do it this way? Please just do it this way.

TL 3 I kind of understand, because he has people from other teams constantly asking him stuff; I'm not sure he really has any time to discuss code-level suggestions. But I wonder if being overworked is a sign that he should be in a higher position or something.

leper khan posted:If it costs more in maintenance than it saves in product risk, it should be removed. Both of these things are measurable.

I agree with the general idea here, but what outcomes do you consider to be "product risk"? So far I approach these tiny decisions from the standpoint of: if none of the factors obviously greatly outweighs the others, prioritize product quality / reliability (or if we have some other documented default decision, make that default decision).

oliveoil fucked around with this message at 03:23 on Oct 27, 2022 |
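The test-double tradeoff above can be made concrete with a toy example. PriceCalculator and Checkout are invented for illustration, not from any real codebase: the real-dependency test doubles as a mini-integration test, while the mock-based version only checks the wiring.

```python
# Hypothetical classes for illustration (not from any real codebase):
# PriceCalculator is the real dependency, Checkout is the class under test.
from unittest.mock import Mock

class PriceCalculator:
    def total(self, items):
        return sum(price for _, price in items)

class Checkout:
    def __init__(self, calculator):
        self.calculator = calculator

    def receipt(self, items):
        return f"Total: {self.calculator.total(items)}"

def test_receipt_with_real_dependency():
    # Mini-integration test: a bug in PriceCalculator.total() fails here too.
    checkout = Checkout(PriceCalculator())
    assert checkout.receipt([("apple", 3), ("pear", 2)]) == "Total: 5"

def test_receipt_with_mock():
    # Strict "unit" style: only checks the wiring. A broken
    # PriceCalculator would still let this test pass.
    calculator = Mock()
    calculator.total.return_value = 5
    checkout = Checkout(calculator)
    assert checkout.receipt([("apple", 3), ("pear", 2)]) == "Total: 5"
    calculator.total.assert_called_once()

test_receipt_with_real_dependency()
test_receipt_with_mock()
```

Both tests pass today, but only the first one would catch a regression in the real calculator.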
# ? Oct 27, 2022 03:09 |
|
How expensive is it if that particular thing has a bug? In time to resolve, reputation of the product/org, legal compliance, etc. The weights of failing to comply with GDPR delete requests aren't the same as misordering some photos. If removing a test, and then failing that test, results in total service failure for 3 weeks, that's different from a feature 0.4% of user sessions interact with being down. You can put dollar costs to these things. The probability of an error happening without a test is then a random variable based on other knowable factors like defect rate, etc.
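The weighing described above can be sketched as a back-of-the-envelope calculation. Every number and name here is invented for illustration; real weights would come from your own incident data.

```python
# Back-of-the-envelope version of the cost/risk weighing. All figures are
# invented for illustration.

def expected_bug_cost(defect_probability, cost_if_shipped):
    """Expected dollar cost of the bug this test would have caught."""
    return defect_probability * cost_if_shipped

def should_keep_test(annual_maintenance_cost, defect_probability, cost_if_shipped):
    """Keep the test only if it saves more in product risk than it costs."""
    return expected_bug_cost(defect_probability, cost_if_shipped) > annual_maintenance_cost

# A GDPR-delete failure is weighted very differently from misordered photos.
gdpr = should_keep_test(annual_maintenance_cost=2_000,
                        defect_probability=0.05,
                        cost_if_shipped=500_000)  # fines, legal time
photos = should_keep_test(annual_maintenance_cost=2_000,
                          defect_probability=0.05,
                          cost_if_shipped=10_000)  # minor UX annoyance
print(gdpr, photos)  # → True False
```

Same maintenance cost, same defect probability; only the weight of the failure differs, and it flips the decision.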
|
# ? Oct 27, 2022 03:37 |
|
minato posted:The most frustrating thing I find about tests is that so much of it is in subjective grey areas. There's all this advice "do it this way, but sometimes it's better to do it that way", "you can write too {many,few} tests", "flakey tests are bad but can sometimes be better than nothing", etc, but it seems to be a total art to drawing where those boundaries are.

The answer is never to ignore it. Don't ignore failing tests for the same reason you don't ignore compiler warnings: when people get used to ignoring things, they miss the important failures. Fixing it or nuking it are the only options. Similarly, either fix your compiler warnings or turn off the ones you're ignoring so devs can tell when the code they're (we're) writing generates new warnings.

What's a "flakey test"? If you mean it fails intermittently without any change in the relevant code, that's an indication of some sort of timing or concurrency issue and needs to be tracked down. Otherwise you'll eventually have an urgent customer complaint that takes a month and a half to fix because nobody can reproduce it. With management angry and breathing down your neck the whole time.

If you mean it frequently fails after PRs, then the most likely explanation is that at least two people on the team aren't doing their jobs: the dev who checked in the breaking change and the reviewer who approved it. I recommend making passing unit tests part of the requirement to get a PR approved. Properly, this is a leadership problem, and whoever is providing technical leadership needs to explain why this isn't acceptable and be backed up by management, but automating running the tests makes it clearer who needs to be talked to.

If you mean a test that's changed frequently, then you might have a bad test. Bad tests are often an indication of bad code that's trying to do too much and needs to be broken up into distinct, better understood, less buggy pieces. Which, incidentally, are easier to write good, stable tests for. Or you might have a crazy test framework that requires emulating clicking a button through the UI instead of just calling the function the button does, like one goon said they did.
|
# ? Oct 27, 2022 03:46 |
|
minato posted:The most frustrating thing I find about tests is that so much of it is in subjective grey areas. There's all this advice "do it this way, but sometimes it's better to do it that way", "you can write too {many,few} tests", "flakey tests are bad but can sometimes be better than nothing", etc, but it seems to be a total art to drawing where those boundaries are. Testing is an art and it's something you get better at with experience. I know that's not an incredibly helpful answer but it's the truth.
|
# ? Oct 27, 2022 03:55 |
|
leper khan posted:How expensive is it if that particular thing has a bug. In time to resolve, reputation of the product/org, legal compliance, etc etc. Nice, that's a very clear way to do it. Thank you.
|
# ? Oct 27, 2022 04:10 |
|
leper khan posted:If it costs more in maintenance than it saves in product risk, it should be removed. leper khan posted:Both of these things are measurable.

I don't think they're always measurable, I don't think they're often measurable, and when they are, I doubt many people make the effort to measure them. I agree you can put a $ cost to some things (like GDPR compliance), but even then, that's not something I'd expect every engineer to know. I don't feel most engineers are writing tests with direct tangible & measurable benefits. When I'm authoring any test, I'm making many small judgement calls with poor data and zero rigor. If I'm lucky I'm asking:

- How much is this going to cost to write? (Code might need to be refactored if it's not in a testable state.)
- What will the program do if no test existed to catch this scenario? Crash or do something crazy, or just be a cosmetic problem?
- What's the business impact (in $$) or loss of customer goodwill this test is saving us?
- How likely is the system under test going to change in the future? (This affects my ongoing maintenance costs.)
- How fast can we deploy a fix if I don't bother to write a test here?

If I'm looking down the barrel of a failing/flakey test:

- What's the estimated cost of fixing this? 5 mins, or a week or more?
- How important is this test anyway? (Whoever wrote it didn't comment explaining their judgement, so now I'm second-guessing them. And maybe the situation has changed since the test was written.)
- How flakey is this? If it's < 1%, can I just get away with wrapping it in a "retry 5x until pass" wrapper, or is that going to piss off the test purists?
- How expensive is this to run? (We have some CI E2E tests that take hours.) Can I just nuke it?

I don't think any of these judgement calls would be consistently applied across tests, let alone across an engineering team, or over time. Unless you're building Therac-25's successor or some PCI processing code, I feel I might as well answer them by rolling dice. (Hell, I'm guilty of simply nuking troublesome tests just because I was tired and grumpy. Not a very principled way to run a codebase.)

ultrafilter posted:Testing is an art and it's something you get better at with experience. I know that's not an incredibly helpful answer but it's the truth.
|
# ? Oct 27, 2022 05:07 |
|
FYI if your unit tests are from the entry point of the application, and the only thing you mock are your downstream/external calls, then your tests will be less fragile (they won't break every time something changes), a broken test will be a more meaningful thing to look out for, you can cover more lines/branches with fewer tests, and you won't have to spend as much time fixing or rewriting tests every time you refactor your tech debt.
|
# ? Oct 27, 2022 05:34 |
|
The goofy thing I say about unit tests is that people fixate on the test when they should fixate on the unit. Having some separated, independent parts of the code makes it easier to mentally compartmentalize when developing against it, so I don't have to sit there in a pile of anxiety, afraid of all the poo poo I don't know about that I'm not doing that will gently caress everything up.

I also am thrilled when I come across a project I have to work against that has some tests that pass, because it shows me a certain intent for the code and that doing certain things will have certain, expected outcomes. Without that, it's who-knows-what going on. I've had cases where I could demonstrate after the fact that the code never actually did anything, despite somebody getting an award for it.

I feel like I could get into the game of stupid software engineering thought leader stuff if I published a book called "Anxiety Oriented Programming" where I make the thesis that the main goal of any development process should be to reduce the anxiety in the project and instill some sense of confidence, and anything in the process that instead increases that anxiety needs to just get thrown the gently caress out.
|
# ? Oct 27, 2022 05:45 |
|
I got a unit you can fixate on
|
# ? Oct 27, 2022 05:55 |
|
Test marked as ignored.
|
# ? Oct 27, 2022 07:02 |
|
gbut posted:Man, I'm so done working with cowboys. My place has many, they "get poo poo done", and then the rest of us have to fix it for months and years afterwards. I ended up loaned out to a single-dev-part-time project to help out, because my team is currently way ahead of schedule. I have been contemplating murder ever since, because the guy is exactly the sort of person who adds 50 lines of weird code into existing (well tested) class, adds no tests and the commit message is "fix".
|
# ? Oct 27, 2022 10:27 |
|
My team hasn't written a single mock for a class that we own, and I'm proud of this. In my experience, mocking just leads to 'interaction' tests that are brittle to changes and not meaningful. Tests like 'if I call the function under test, Mock.Foo() gets called once and then Mock.Bar() gets called twice' literally make it more difficult to refactor the code.

Rocko Bonaparte posted:I feel like I could get into the game of stupid software engineering thought leader stuff if I published a book called "Anxiety Oriented Programming" where I make the thesis that the main goal of any development process should be to reduce the anxiety in the project and instill some sense of confidence, and anything in the process that instead increases that anxiety needs to just get thrown the gently caress out.

It's not exactly the same concept, but this kinda reminds me of "risk driven design": https://www.georgefairbanks.com/software-architecture/risk-driven-model/ tldr: the amount of up-front time you spend on design should be related to how bad it would be if you screw up.
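That brittleness can be shown with a toy example. Repository and save_pair are invented for illustration: the interaction-style test pins down *how* the work is done, the state-style test only *what* the result is.

```python
# Repository and save_pair are invented for illustration. Both tests pass
# today, but only the state-based one survives refactoring save_pair()
# into, say, a single batch insert.
from unittest.mock import Mock

class Repository:
    def __init__(self):
        self.rows = []

    def insert(self, row):
        self.rows.append(row)

def save_pair(repo, a, b):
    repo.insert(a)
    repo.insert(b)

def test_interaction_style():
    # Pins down *how* the work is done; brittle to refactoring.
    repo = Mock()
    save_pair(repo, 1, 2)
    assert repo.insert.call_count == 2

def test_state_style():
    # Only checks the observable outcome; refactors are safe.
    repo = Repository()
    save_pair(repo, 1, 2)
    assert repo.rows == [1, 2]

test_interaction_style()
test_state_style()
```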
|
# ? Oct 27, 2022 12:50 |
|
CPColin posted:I got a unit you can fixate on CANNOT REPRODUCE
|
# ? Oct 27, 2022 13:01 |
|
Rocko Bonaparte posted:I feel like I could get into the game of stupid software engineering thought leader stuff if I published a book called "Anxiety Oriented Programming" where I make the thesis that the main goal of any development process should be to reduce the anxiety in the project and instill some sense of confidence, and anything in the process that instead increases that anxiety needs to just get thrown the gently caress out.
|
# ? Oct 27, 2022 13:18 |
|
Love Stole the Day posted:FYI if your unit tests are from the entry point of the application, and the only thing you mock are your downstream/external calls, then your tests will be less fragile (they won't break every time something changes), a broken test will be a more meaningful thing to look out for, you can cover more lines/branches with fewer tests, and you won't have to spend as much time fixing or rewriting tests every time you refactor your tech debt. Lots of people will refuse to call this a “unit” test, fwiw.
|
# ? Oct 27, 2022 13:31 |
|
Rocko Bonaparte posted:Test marked as ignored. Itaipava posted:CANNOT REPRODUCE PR approved
|
# ? Oct 27, 2022 14:14 |
|
LGTM (let's get that money)
|
# ? Oct 27, 2022 14:22 |
|
Xarn posted:I ended up loaned out to a single-dev-part-time project to help out, because my team is currently way ahead of schedule. I have been contemplating murder ever since, because the guy is exactly the sort of person who adds 50 lines of weird code into existing (well tested) class, adds no tests and the commit message is "fix". The only way I found so far against that crap is not allowing coverage drop on PRs. I don't care if we cover only 23% of the code, if you make it 22.9%, it's not going in, buddy.
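That no-coverage-drop gate amounts to a one-line comparison. coverage_gate here is a hypothetical stand-in for the CI check; in practice the two percentages would come from your coverage tool's reports on the base branch and the PR.

```python
# coverage_gate is a hypothetical stand-in for the CI check; in practice the
# two percentages would come from your coverage tool's reports.

def coverage_gate(base_pct, pr_pct, tolerance=0.0):
    """Return True if the PR may merge under a no-coverage-drop policy."""
    return pr_pct + tolerance >= base_pct

assert coverage_gate(23.0, 23.0)      # holding steady is fine
assert coverage_gate(23.0, 24.1)      # improving is fine
assert not coverage_gate(23.0, 22.9)  # "it's not going in, buddy"
```

A small tolerance is worth considering so that moving uncovered lines around doesn't block a PR on rounding noise.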
|
# ? Oct 27, 2022 15:03 |
|
gbut posted:The only way I found so far against that crap is not allowing coverage drop on PRs. I don't care if we cover only 23% of the code, if you make it 22.9%, it's not going in, buddy. This is how you get malicious compliance though.
|
# ? Oct 27, 2022 15:09 |
|
That's a risk, sure, but they're too lazy to do it, so they pawn off the test writing to juniors to wrap up their projects. And juniors are too scared to lawyer about it.
|
# ? Oct 27, 2022 16:36 |
|
raminasi posted:Lots of people will refuse to call this a “unit” test, fwiw. I have found that people who care if something is or isn't a "unit" test pretty universally don't have any thoughts on testing worth listening to.
|
# ? Oct 27, 2022 17:14 |
|
Plorkyeran posted:I have found that people who care if something is or isn't a "unit" test pretty universally don't have any thoughts on testing worth listening to. I care, just because I've encountered so many cases of people describing things that don't fit any reasonable definition of a unit test as unit tests. There's a lot of room within the definition of the term to quibble over irrelevant details, but I've literally encountered teams who call their manual QA department "unit tests" so yeah I care about making the distinction clear.
|
# ? Oct 27, 2022 17:20 |
|
Plorkyeran posted:I have found that people who care if something is or isn't a "unit" test pretty universally don't have any thoughts on testing worth listening to. developer_smells.txt
|
# ? Oct 27, 2022 17:22 |
|
New Yorp New Yorp posted:I care, just because I've encountered so many cases of people describing things that don't fit any reasonable definition of a unit test as unit tests. There's a lot of room within the definition of the term to quibble over irrelevant details, but I've literally encountered teams who call their manual QA department "unit tests" so yeah I care about making the distinction clear. Why does it matter if someone wants to call their QA department unit tests?
|
# ? Oct 27, 2022 17:39 |
|
Plorkyeran posted:Why does it matter if someone wants to call their QA department unit tests? Because it should be "test units"
|
# ? Oct 27, 2022 17:40 |
|
Plorkyeran posted:Why does it matter if someone wants to call their QA department unit tests? If I ask "Do you have unit testing for all your software" and they answer "Yes" meaning "We have some guy who follows a script to test things out manually", then I'll write them off as liars or salespeople. If they say "Yes" meaning they have an automated system doing end to end testing and covers the overall functionality of large chunks of code rather than testing 50 line functions or whatever, then that's cool and they are cool people.
|
# ? Oct 27, 2022 17:49 |
|
You should probably be asking if they have automated testing if you don't care what form that testing takes as long as it's not manual QA.
|
# ? Oct 27, 2022 18:02 |
|
LLSix posted:What's a "flakey test"? It could be (and frequently is) a problem in the test itself. It's very common with tests that do a lot of not-very-precise async stuff, like Selenium tests. So the cost-benefit analysis can be "do I spend hours, possibly days, trying to improve this piece of poo poo test that will probably go back to being flakey the next time someone changes something, or do I spend 3 seconds telling the build server to retry the build?" Bonus points if you're working in a monorepo and the tests, or the stuff they're testing, are the responsibility of another team halfway across the globe.
|
# ? Oct 27, 2022 18:58 |
|
Plorkyeran posted:Why does it matter if someone wants to call their QA department unit tests? Because they told the FDA that their developers were going to unit test as part of their process, but they had a really tight deadline. So they got QA to "unit test" (read: run scripted manual tests that the devs were doing themselves) and check "pass/fail." The QA manager was happy because he could hire more QA contractors and have an army of tall white college men running around doing various menial tasks. The QA contractors were happy because they got paid overtime for performing an effectively pointless task. The devs were happy because they didn't have to run all the scripted manual tests that were useless anyway and could go back to coding and complaining. You can see how everyone involved in that fiasco of a project would clash with the "new" definition of unit tests (e.g. being automated, testing units, no side effects etc) - while you can hire people off the street to put checkmarks in checkboxes, having an external organization pitch in to "help" by writing unit tests for projects written without automated testing in mind is a futile process.
|
# ? Oct 27, 2022 18:59 |
|
thotsky posted:It could (and frequently is), be a problem in the test itself. It's very common with tests that do a lot of not very precise async stuff, like Selenium tests. So, the cost-benefit analysis can be "do I spend hours, possibly days trying to improve this piece of poo poo test that will probably go back to being flakey next time someone changes something, or do I spend 3 seconds telling the build server to retry the build." Bonus points if you're working in a monorepo and the tests or the stuff they're testing is the responsibility of another team halfway across the globe. I forgot that was a possibility because I've only seen the timing issue be in the test itself twice. I mostly do embedded with a sprinkling of desktop development, so I can believe it's more common in other domains. LLSix fucked around with this message at 20:18 on Oct 27, 2022 |
# ? Oct 27, 2022 20:16 |
|
Plorkyeran posted:Why does it matter if someone wants to call their QA department unit tests? Because words having meanings is the basis for all human communication. I honestly don't know how else to answer this question.
|
# ? Oct 27, 2022 20:32 |
|
Would it be any better if they called their QA department integration tests? The problem with that scenario isn't that they used the word "unit".
|
# ? Oct 27, 2022 22:34 |
|
Our company has this insane focus on demoing things, to the point where the overhead of making demos takes 2x the actual work. "Demo driven development".
|
# ? Oct 27, 2022 23:08 |
|
That must be a trend. We used to do team demos every sprint, and now they want every engineer to do a demo of their work. I hope my interview goes well.
|
# ? Oct 27, 2022 23:11 |
|
Good Will Hrunting posted:Our company has this insane focus on demoing things, to the point where the overhead of making demos takes 2x the actual work. "Demo driven development". My company has 2-3 demo days per year. I have one coming up in 2 weeks and it's kinda dumb, since nothing I do has any user interface. Gotta come up with some poo poo to demo, more than just "see, if you run this command, then this will happen" on a Linux console.
|
# ? Oct 27, 2022 23:11 |