|
Jazerus posted:it's because java's getter/setter implementation brain damaged an entire generation by requiring explicit getButts/setButts calls, instead of making them a language feature for easily overloading access and assignment like in a sane language My coworker has an immutability stick shoved so far up his rear end that in order to change a field on any object, a new object needs to be created first. I blame lombok for allowing him to do this with annotations instead of having to manually write out all this bullshit code. Let me tell you how fun it is to debug code when you can't watch anything.
|
# ? May 31, 2019 15:55 |
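The pattern being complained about can be sketched in plain Java. This is roughly what Lombok's @Value/@With annotations would generate; the class and field names here are made up for illustration.

```java
// An immutable object where "changing" a field means constructing a
// whole new instance. Order and its fields are hypothetical.
final class Order {
    private final String id;
    private final int quantity;

    Order(String id, int quantity) {
        this.id = id;
        this.quantity = quantity;
    }

    String getId() { return id; }
    int getQuantity() { return quantity; }

    // The "wither": returns a copy with one field replaced.
    Order withQuantity(int quantity) {
        return new Order(this.id, quantity);
    }
}
```

Calling `order.withQuantity(5)` leaves the original object untouched, which is exactly why a debugger watch on `order` never shows anything changing.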
|
poemdexter posted:My coworker has an immutability stick shoved so far up his rear end that in order to change a field on any object, a new object needs to be created first. I blame lombok for allowing him to do this with annotations instead of having to manually write out all this bullshit code. Let me tell you how fun it is to debug code when you can't watch anything. Sounds like you're getting the worst of both worlds. Having no mutable state in your application is extremely cool and good and makes debugging a breeze.
|
# ? May 31, 2019 16:29 |
|
If you use our product as a glorified filesystem browser it's faster!
|
# ? May 31, 2019 17:09 |
|
Clanpot Shake posted:Sounds like you're getting the worst of both worlds. Having no mutable state in your application is extremely cool and good and makes debugging a breeze. He's young and prefers implementing patterns everywhere. One of his codebases is incredibly large, filled with mappers and entities and models. Can't just pass data from one place to another inside the application! My personal favorite is this gem: code:
|
# ? May 31, 2019 17:58 |
|
poemdexter posted:He's young and prefers implementing patterns everywhere. One of his codebases is incredibly large filled with mappers and entities and models. Can't just pass data from one place to another inside the application! Oh, I long for the days when I too thought that writing this would win me brownie points with other coworkers.
|
# ? May 31, 2019 18:08 |
|
Turns out the only point of implementation of that class was something that used two enums: BatchProcess and BatchProcessEnum. Both of the enums have the same values inside. I just had a long discussion with team lead and the dev who did this about it. They are super anal about keeping the request layer separate from the app layer separate from the db layer. I'm all for this if we're talking about logic but this is just objects that aren't different. They are POJOs with slightly different field names. And on top of it, mappers that go from one to the other. The cherry on top is that most of the controller layer is autogenerated by swagger. I asked what happens when someone alters those files to add additional logic and then later someone changes the API and regenerates the controllers? I just got a shrug as a response. I'm so happy I'll never touch that codebase.
|
# ? May 31, 2019 18:35 |
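The layering described above can be sketched as follows. The two enum names come from the post; the values and the mapper are hypothetical reconstructions of what such code usually looks like.

```java
// Two enums with identical values, one per "layer", plus a mapper whose
// only job is to copy one into the other.
enum BatchProcess { DAILY, WEEKLY }        // request-layer enum (values made up)
enum BatchProcessEnum { DAILY, WEEKLY }    // db-layer enum, same values

class BatchProcessMapper {
    // Maps by name, so the two enums have to be kept in lockstep by hand:
    // add a value to one and forget the other, and this throws at runtime.
    static BatchProcessEnum toEntity(BatchProcess p) {
        return BatchProcessEnum.valueOf(p.name());
    }
}
```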
|
NihilCredo posted:If this is 'the greatest piece of bad code you've ever seen', you've lived either a charmed or a short life.
|
# ? May 31, 2019 19:55 |
|
CapnAndy posted:Oh, c'mon. It is an entire file, with its own custom namespace, with a custom class you have to instantiate, with carefully laid out regions, all to store a single string that you could just declare literally anywhere if you needed it. I've never seen something so idiotically beautiful. There's plenty of stupid stuff in that class: the regions, not using auto-properties, the pointless constructor declaration, but there's absolutely no inherent horror in wrapping a string in a custom type. Maybe the way it's used makes it stupid, but that's not demonstrated from that example.
|
# ? May 31, 2019 20:38 |
|
I don't know, I'm kind of with CapnAndy; the act itself isn't that sacrilegious, but the lengths gone to in order to actually do it seem to be.
|
# ? May 31, 2019 20:49 |
|
The boilerplate stuff is excessive and pointless, but, if anything, the class doesn't take the idea far enough. If it were immutable with some basic input validation in the constructor, then it'd be a definite improvement over just passing strings around. So maybe the real horror is that they half-assed it.
|
# ? May 31, 2019 21:00 |
|
There's nothing terribly wrong with creating a class "just" to hold a string, if you're going to use that class to enforce type-safety (you may use that class somewhere you couldn't just use a string, to accomplish something specific). All the extra junk added around the thing is a bit silly, of course. And it's arguably taking things to an extreme to wrap a single primitive value in that way. But it is defensible.
|
# ? May 31, 2019 22:46 |
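A minimal sketch of the two posts above, doing the string-wrapper idea properly: immutable, validated once in the constructor, and none of the regions or boilerplate. `CustomerId` is a hypothetical name.

```java
// A typed wrapper around a string: type-safety plus one central
// validation point, in about a dozen lines.
final class CustomerId {
    private final String value;

    CustomerId(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("CustomerId must be non-empty");
        }
        this.value = value;
    }

    String value() { return value; }
}
```

A method declared as `void ship(CustomerId id)` can no longer be handed a raw order-number string by accident, which is the type-safety argument being made above.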
|
"This is the coding horrors thread! Away with your at best mildly perturbing code."
|
# ? May 31, 2019 23:56 |
|
Is posting stuff from Twitter cheating? https://twitter.com/Zorchenhimer/status/1134178911628267520
|
# ? Jun 1, 2019 18:34 |
What if I need to reuse my Boolean compare, huh? Did you consider that?
|
|
# ? Jun 1, 2019 18:37 |
|
Joda posted:What if I need to reuse my Boolean compare, huh? Did you consider that? Look at it again
|
# ? Jun 1, 2019 18:59 |
|
Hammerite posted:Look at it again That's actually a pretty great layered horror.
|
# ? Jun 1, 2019 19:07 |
Hammerite posted:Look at it again Holy poo poo
|
|
# ? Jun 1, 2019 19:14 |
|
Forget all that. The real horror is that the compare method should return an int.
|
# ? Jun 1, 2019 20:37 |
|
i am like a factory for bugs like that which is why i write so many tests
|
# ? Jun 1, 2019 20:42 |
|
DONT THREAD ON ME posted:i am like a factory for bugs like that which is why i write so many tests I've seen code about this bad with perfect unit test coverage.
|
# ? Jun 1, 2019 22:41 |
|
Doom Mathematic posted:I've seen code about this bad with perfect unit test coverage. Because test coverage doesn't tell you anything about the quality or correctness of code, which is why people need to stop treating it as if it does.
|
# ? Jun 1, 2019 23:13 |
|
New Yorp New Yorp posted:Because test coverage doesn't tell you anything about the quality or correctness of code, which is why people need to stop treating it as if it does. Middle managers need metrics to show to their managers. Numbers that hopefully go up. The numbers themselves are meaningless; therefore, anything that they can put in there helps. Code coverage is one such number. Lines of code was another (thank god they know better now).
|
# ? Jun 1, 2019 23:18 |
|
I noticed the other day that I'd turned something like return foo.split(",") to return foo.split(",") + bar.split(",") and the test coverage would have stayed 100% anyway had I not already written tests to confirm the new logic. Then I started to wonder if I should rewrite the function in a way that would have made the test coverage drop, had I not already added the necessary tests. Then I wondered what kind of antipattern that is.
|
# ? Jun 2, 2019 00:05 |
|
Doom Mathematic posted:I've seen code about this bad with perfect unit test coverage. who said anything about coverage?
|
# ? Jun 2, 2019 00:06 |
|
CPColin posted:I noticed the other day that I'd turned something like return foo.split(",") to return foo.split(",") + bar.split(",") and the test coverage would have stayed 100% anyway had I not already written tests to confirm the new logic. Then I started to wonder if I should rewrite the function in a way that would have made the test coverage drop, had I not already added the necessary tests. Then I wondered what kind of antipattern that is. Test coverage is only part of the picture. It’s still up to you and the person reviewing your PR to determine that the tests are testing relevant things and that the tests will fail when you break the thing. If the tests still pass if the function does the needful and fail if it doesn’t, that seems fine?
|
# ? Jun 2, 2019 00:29 |
|
The function takes an object that contains those foo and bar values, so the signature didn't change, which would have made the tests fail to compile. If I had not touched the tests, they would still have passed with 100% code coverage. That felt weird.
|
# ? Jun 2, 2019 00:44 |
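The situation described can be sketched like this. The method below stands in for the change above: bar's parts are now appended, and a pre-existing test that only asserts on foo's parts still executes the new line, so line coverage stays at 100% without the new logic ever being checked. Names are made up.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class Splitter {
    // After the change: bar's parts are appended to foo's.
    static List<String> parts(String foo, String bar) {
        List<String> out = new ArrayList<>(Arrays.asList(foo.split(",")));
        out.addAll(Arrays.asList(bar.split(","))); // new line: executed by old tests, never asserted on
        return out;
    }
}
```

An old test asserting only that `parts("a,b", "c")` contains "a" and "b" runs every line of the method and stays green; only a new assertion on "c" actually pins the changed behaviour.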
|
like that code snippet is magnificently bad but it's also the kind of thing i've refactored myself into under a deadline and only got caught because i decided to test the actual meat of my code.
|
# ? Jun 2, 2019 00:53 |
|
UraniumAnchor posted:Is posting stuff from Twitter cheating?
|
# ? Jun 2, 2019 01:54 |
|
It returns 0 when the two objects are equal and 1 when the first object is bigger than the second; if your tests don't cover the third case, then it looks a lot like a Compare method
|
# ? Jun 2, 2019 11:51 |
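The tweet itself isn't reproduced in the thread, so this is only a guess at the shape of the bug from the description above: equality and "greater than" are handled, but the remaining case falls through to the wrong value.

```java
// A compare method that is wrong in exactly one of its three cases.
class BrokenCompare {
    static int compare(int a, int b) {
        if (a == b) return 0;
        if (a > b) return 1;
        return 1; // bug: should be -1; tests that skip this case never notice
    }
}
```

Assertions on `compare(2, 2)` and `compare(3, 1)` pass against this code, which is how a suite missing the third case makes it "look a lot like a Compare method".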
|
More a Coding-oh-my-god-I'm-too-old-for-this I get invited to a slack channel to give advice on a particularly crusty old system that a team of shiny new consultants are replacing. This system has been making about $3 million a year, for maybe a week's dev work annually for the last 7 years. The replacement project is budgeted to run a year with a team of 3 dev-pairs + senior + qa + delivery lead, etc. slack posted:contractor1: We've fixed the tests to cover the bugs in the CSV generator
|
# ? Jun 2, 2019 12:18 |
|
quote:The function takes an object that contains those foo and bar values, so the signature didn't change, which would have made the tests fail to compile. If I had not touched the tests, they would still have passed with 100% code coverage. That felt weird. I didn't understand what "100% code coverage" means, so I googled "test coverage metric" and it came back with a definition (is that what you mean?). It seems clear that that's a stupid metric, since it tests only whether a line of code is touched, not whether all combinations of possible program conditions/parameter states (qualitatively defined) are tested. So it isn't any surprise that new tests had to be added to catch the fact that the logic had changed (now we need a new test to make sure that if there is anything in bar, then that extra stuff gets included in the return value). I'm just trying to figure out what you mean, because I was having trouble understanding. It sounds like you're assigning a lot of weight to this "test coverage" metric, which is, as already remarked, a stupid metric. But maybe that was the point of your anecdote in the first place. Trammel posted:More a Coding-oh-my-god-I'm-too-old-for-this did you ask them what "the library is too big" means?
|
# ? Jun 2, 2019 13:27 |
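The definition being complained about is line (statement) coverage: it records only whether a line ran. A small illustration, with made-up names:

```java
// This method reaches 100% line coverage from just two calls, yet the
// effect of b is never exercised.
class Cov {
    static int f(boolean a, boolean b) {
        if (a || b) {
            return 1;
        }
        return 0;
    }
}
// f(true, false) covers the if-branch and f(false, false) covers the last
// line: every line touched, but f(false, true) never runs, so a bug that
// only shows up when b decides the outcome would go undetected.
```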
|
Hammerite posted:did you ask them what "the library is too big" means? Not my project. Personally if I can retire without ever having written a CSV parser or generator on the job, I'll be pretty satisfied. Still, the consulting company bills by the hour, so I guess they're happy too.
|
# ? Jun 2, 2019 14:16 |
|
New Yorp New Yorp posted:Because test coverage doesn't tell you anything about the quality or correctness of code, which is why people need to stop treating it as if it does. Yes, nor do unit tests in general. Not if they're written after the implementation by people who don't have a clear idea of what they're doing. My point is, even if every possible behaviour in the codebase is rigorously tested, the code and the tests can still be wrong together. CPColin posted:I noticed the other day that I'd turned something like return foo.split(",") to return foo.split(",") + bar.split(",") and the test coverage would have stayed 100% anyway had I not already written tests to confirm the new logic. Then I started to wonder if I should rewrite the function in a way that would have made the test coverage drop, had I not already added the necessary tests. Then I wondered what kind of antipattern that is. This is the textbook use case for TDD, writing the failing test first and then fixing the code to make the test pass.
|
# ? Jun 2, 2019 14:22 |
|
New Yorp New Yorp posted:Because test coverage doesn't tell you anything about the quality or correctness of code, which is why people need to stop treating it as if it does. I'd rather walk into a codebase written by average developers with 100% test coverage than any number under 100. Really smart and experienced developers know what's important to test, what responsibilities belong to the code, where the test boundaries lie, and what the best balance between unit tests and integration tests is. Average developers get stuck writing the test description, and cut'n'paste it from somewhere else.
|
# ? Jun 2, 2019 14:24 |
|
In my experience coverage is a useful metric if you can avoid thinking of it as more coverage === more good. Knowing that a file has 100% coverage doesn't tell you there are no bugs, but knowing a file has 50% coverage tells you there's a bunch of code that no one is checking at all. I also like looking at incremental coverage (i.e. change in coverage from a particular CL). Again, you can't read it like negative incremental coverage === auto reject, but if you're writing untested code there should probably be a reason. As a reviewer seeing negative incremental coverage is a signal to look more closely for this reason.
|
# ? Jun 2, 2019 14:42 |
|
it's because it's more efficient to just write the code without any bugs in the first place, so obviously that is what i, a perfect logical machine, always do. if it produces the wrong answers then that's probably your fault for running it in the wrong reality
|
# ? Jun 2, 2019 15:09 |
|
Nippashish posted:In my experience coverage is a useful metric if you can avoid thinking of it as more coverage === more good. Knowing that a file has 100% coverage doesn't tell you there are no bugs, but knowing a file has 50% coverage tells you there's a bunch of code that no one is checking at all. Yeah, this is a good post. If you're a good developer (meaning you know your limits, not that you're a Rock Star Programmer) test coverage gives you useful information and on top of that it gives you something to throw at managers! Is there a good way of perfectly knowing how good and useful the tests are for a project? I can't think of a way that doesn't involve being an omniscient AI, so all we can do is make the best decisions we can with the snippets of information we have.
|
# ? Jun 2, 2019 15:13 |
|
Hammerite posted:did you ask them what "the library is too big" means? dude some of these apache things are, like, hundreds of kb. maybe even megabytes. across dozens of vms it can really add up
|
# ? Jun 2, 2019 15:15 |
|
Thermopyle posted:Is there a good way of perfectly knowing how good and useful the tests are for a project? ... Knowing how "good and useful" the tests are is typically going to involve making an assessment of which sets of input represent truly distinct cases that need to be individually accounted for with their own tests. So no, there is always going to be an element of qualitative assessment except in the very most trivial of cases. There is no substitute for someone thinking "now what are all the ways this can go wrong". NB. I'm going to immediately contradict myself and point out that mutation testing is sort of a way of measuring how good tests are, but it is difficult to use effectively from what I understand
|
# ? Jun 2, 2019 16:01 |
|
Thermopyle posted:Is there a good way of perfectly knowing how good and useful the tests are for a project? I can't think of a way that doesn't involve being an omniscient AI, so all we can do is make the best decisions we can with the snippets of information we have. I've heard exciting things about mutation testing, but I don't know a whole lot about it. I know it doesn't do a whole lot in the case where the code is objectively wrong and the tests have been written to test that objectively wrong behaviour, but it does do something for assuring that the tests are actually preventing behaviour changes, as opposed to just wasting cycles on meaningless assertions.
|
# ? Jun 2, 2019 16:03 |
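The mutation-testing idea from the two posts above can be sketched like this: the tool applies a small mutation to the code and checks that at least one test fails ("kills" the mutant). `isAdult` and the threshold are made up for illustration.

```java
// Original code; a typical mutation would weaken >= to >.
class Eligibility {
    static boolean isAdult(int age) {
        return age >= 18; // mutant under test: age > 18
    }
}
// A suite that only checks ages 17 and 21 passes against both the
// original and the mutant, so the mutant survives and the suite is shown
// up as weak; the boundary test on age 18 is what kills it.
```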