|
Gul Banana posted:what is an ice box? For us it was a landing pad for stuff a client wanted but we may not have budget for after adding all required stories, then pruning them to a level we thought we could accomplish with our budget and time. Basically an out of scope holding pen.
|
# ? Aug 27, 2016 16:29 |
|
|
# ? May 9, 2024 21:46 |
|
My favorite so far is a project dreamt up by a junior guy. It had about 15 dead code paths, each due to using == instead of .equals for string comparison. It also had a bunch of "zombie" classes, like MyClass and MyClass2. It was clear that MyClass2 was a rewrite of MyClass, but he'd left one or two calls to MyClass in there by mistake I guess, who knows. It was amazing the code actually did anything at all, much less what it was "supposed" to.
|
# ? Aug 27, 2016 16:47 |
|
My Rhythmic Crotch posted:My favorite so far is a project dreamt up by a junior guy. It had about 15 dead code paths, each due to using == instead of .equals for string comparison. To be fair though, that's an easy mistake to make that can be explained by inexperience. I kept making that mistake at work because I write Java there but came from C#. stringOne == stringTwo is a value comparison in C# but a reference comparison in Java.
|
# ? Aug 27, 2016 17:59 |
|
vonnegutt posted:I like testing. If it's for pre-existing (legacy) code, it helps me understand what's going on more than a read through does, because I'm checking my assumptions. If it's for my own code, it proves that I've done the thing I tried to do. Yeah, I've been adding a pile of tests to the legacy code I inherited despite getting a lot of "That's not really part of the ticket..." because I have to verify that I understand this undocumented mess anyway so we may as well get some lasting benefit from it.
|
# ? Aug 27, 2016 18:50 |
|
The Leck posted:Ding ding ding! And the lack of any tests/documentation mean that just leaving most of the mess alone is the (short term) safest course of action. Occasionally someone comes along and yanks out a section of code into its own class where it should be, but it inevitably causes subtle to catastrophic bugs in functionality that the well-meaning person working on it wasn't even aware of. My approach to this is generally to break it down into individual (if large) chunks, shunt that code off to their own methods, and see if the broken-down methods reveal more about what the code is trying to do in a literate fashion.
|
# ? Aug 28, 2016 02:29 |
|
ToxicSlurpee posted:To be fair though that's an easy mistake to make that can be explained by inexperience. I kept making that mistake at work because there I write Java but came from C#. stringOne == stringTwo is a comparison in C# but is a reference equals in Java. Someone keep me honest here, but I thought the variance in behavior (compared to Java) is because *it doesn't matter* in C#, due to the fact that identical strings are aliased under the hood? A quick peek at LINQPad suggests this to be true - ==, Equals(), and Object.ReferenceEquals() will all return true for two strings with the same value.
|
# ? Aug 28, 2016 06:14 |
|
Cuntpunch posted:Someone keep me honest here, but I thought the variance in behavior(compared to Java) is because *it doesn't matter* in C# due to the fact that identical strings are aliased under the hood? A quick peek at Linqpad suggests this to be true - ==, Equals(), and Object.ReferenceEquals() will all return true for two strings with the same value. Not quite - it's very possible to have two strings that are equal using == but are different instances. The behaviour you're seeing in Linqpad is due to string interning.
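For what it's worth, both behaviours are easy to demonstrate from the Java side. A minimal sketch (none of this is from any poster's actual code): literals with the same value are interned, so == happens to look correct, but a string constructed at runtime is a distinct instance.

```java
class StringEquality {
    public static void main(String[] args) {
        String a = "hello";
        String b = "hello";             // literals with the same value share one interned object
        String c = new String("hello"); // explicitly a new, distinct instance

        System.out.println(a == b);          // true: same interned object
        System.out.println(a == c);          // false: == compares references, and these differ
        System.out.println(a.equals(c));     // true: .equals compares values
        System.out.println(a == c.intern()); // true: intern() returns the pooled instance
    }
}
```

This is why the == bug can lurk for so long: quick tests with literals make == look right, and it only fails once a string arrives from user input, a file, or runtime concatenation.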
|
# ? Aug 28, 2016 06:40 |
|
Volmarias posted:Android PackageManagerService.java code:
|
# ? Aug 28, 2016 08:33 |
|
Cuntpunch posted:Someone keep me honest here, but I thought the variance in behavior(compared to Java) is because *it doesn't matter* in C# due to the fact that identical strings are aliased under the hood? A quick peek at Linqpad suggests this to be true - ==, Equals(), and Object.ReferenceEquals() will all return true for two strings with the same value. In C# strings are a primitive type. In Java they are not. In Java == is a reference comparison for string objects. That's the big difference; this is also why == works the same way for ints in both languages.
|
# ? Aug 28, 2016 17:47 |
|
ToxicSlurpee posted:In c# strings are a primitive type. In Java they are not. In Java == is a reference comparison for string objects. That's the big difference; this is why == works that way for ints in both langauge. C# strings are not a primitive type. C#'s "trick" is that it has operator overloading. System.String provides an == overload that does value comparison instead of reference comparison. Reference types which don't provide an == overload will do reference comparison and value types will do value comparison.
|
# ? Aug 28, 2016 18:18 |
|
csammis posted:C# strings are not a primitive type. OK no you're right; I should have clarified that they behave like one. Strings are kind of bizarre from a programming standpoint.
|
# ? Aug 28, 2016 18:21 |
|
jony neuemonic posted:Yeah, I've been adding a pile of tests to the legacy code I inherited despite getting a lot of "That's not really part of the ticket..." because I have to verify that I understand this undocumented mess anyway so we may as well get some lasting benefit from it. A good argument for this is that it is a part of the ticket. Because an implicit part of every ticket is, "... while preserving the rest of the expected behavior in the system." Tests are a great way to do that and as you said they make everything easier when you have to revisit that section of the code. It's tough to stand firm when you're under management pressure to get a fix out the door now now now don't worry about making it perfect also let's skip over some of our deployment steps the client is calling every day oh god. I like to invoke past incidents to get them to back off. Like the time we had a stubborn bug in one section of our system. A really smart dev thought he had a solution, so he coded up a change, did some basic manual testing, and rolled it out as a hotfix. Phones started blowing up, oops one of our major clients (who used a different configuration than the client who had the original bug) suddenly couldn't use the system. Okay roll back and dive back into the code. Repeat several times, until the dev got fed up and put the kibosh on more hotfixes until he was able to refactor and write better tests. The original logic was so arcane and so deeply rooted in so many parts of the system that nobody understood it and any change had the butterfly->hurricane effect. It's still pretty bad, but at least there are SOME tests around it.
|
# ? Aug 28, 2016 19:25 |
|
KoRMaK posted:I kicked a bunch of stuff back to the junior on the team that didn't pass code review. You wanna be my QA? Regarding tests, we have a ton of mock stuff but it's all based on real, obfuscated data (I work for a payroll company) and works really well for us. We are starting to use SpecFlow and some Jenkins stuff for front-end and it drastically reduces the number of "ok this commit broke features x y and z".
|
# ? Aug 28, 2016 20:29 |
|
A lot of developers have problems internalizing the crazy idea that testing gets a lot faster when you actually get, you know, good at it.
|
# ? Aug 29, 2016 01:29 |
|
Vulture Culture posted:A lot of developers have problems internalizing the crazy idea that testing gets a lot faster when you actually get, you know, good at it. A lot of developers have probably also worked with nothing but giant balls of yarn as their dependency graphs for their whole career. "Writing tests takes too long and they take too long to run, because I have to set up the state of the database JUST SO and I have to get real data <here> and <here> so that the third party API calls will return the right messages." Just writing testable code is a skill people don't realize they don't have.
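To make the "testable code is a skill" point concrete, here's a minimal Java sketch (all class and method names are hypothetical, purely for illustration): the production class depends on a narrow interface rather than a concrete database, so a test can substitute an in-memory fake instead of setting up database state "JUST SO."

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The production class depends on this narrow interface instead of a
// concrete database, so tests never need a real connection.
interface UserRepository {
    Optional<String> findEmail(int userId);
}

class WelcomeMailer {
    private final UserRepository users;

    WelcomeMailer(UserRepository users) {
        this.users = users;
    }

    String buildGreeting(int userId) {
        return users.findEmail(userId)
                    .map(email -> "Welcome, " + email)
                    .orElse("Unknown user");
    }
}

// Map-backed fake used only by tests: no fixtures, no third-party calls.
class InMemoryUserRepository implements UserRepository {
    private final Map<Integer, String> data = new HashMap<>();

    void add(int id, String email) {
        data.put(id, email);
    }

    @Override
    public Optional<String> findEmail(int userId) {
        return Optional.ofNullable(data.get(userId));
    }
}
```

A test then constructs WelcomeMailer with the fake, asserts on buildGreeting, and runs in milliseconds; the giant-ball-of-yarn version of the same class would have reached through a connection pool to a live database.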
|
# ? Aug 29, 2016 02:13 |
|
Setting up a good mock database or even an entire distributed system for your tests can be a bit daunting, but for Java developers I've found that the Accumulo codebase has examples of how to perform integration tests that have gazillions of heavyweight dependencies. They wrote a small Hadoop cluster that can be programmatically initialized and configured from within integration tests, and it's pretty repeatable. That sounds crazy, but once you read through the code you realize it's not that bad when you actually understand the scope of testing and validation that needs to be done routinely, and from there you can determine how to separate out your components so it's easier to test parts in isolation. I can understand not having good tests if you don't have the time. But even without good examples of testable codebases, haven't people heard of mock objects and design principles like the Law of Demeter or separation of concerns and such?
|
# ? Aug 29, 2016 02:48 |
|
i write tons of tests that aren't strictly necessary because the first thing i do before writing code is write tests that exercise the interface. even if my tests are plainly useless (like the `assert(type(foo.bar), String)` example) they at least force me to consider how the code is going to be used and require that i actually have thought about the preconditions necessary to use the interface. this also encourages writing nice clean implementations with a minimum of dependencies or side effects. i work with a bunch of 'test after' people and their poo poo is consistently broken and ridiculous to use
|
# ? Aug 29, 2016 03:17 |
|
im so confused about writing tests first. having stuff that is "plainly useless" seems like its gonna dilute the signal with a lot of noise. do other people see it?
|
# ? Aug 29, 2016 03:51 |
|
We had somebody come in, advocate test-driven design left and right, and write a caching class that has lots of interesting "quirks." And no tests.
|
# ? Aug 29, 2016 05:55 |
|
Vulture Culture posted:A lot of developers have problems internalizing the crazy idea that testing gets a lot faster when you actually get, you know, good at it. It was pretty fun when I sat down with a developer (I'm a subject-matter expert officially but I end up doing a fair amount of QA testing for reasons) and started going through a routine test case in our expert system that I've been doing regularly for two years by now, and their jaw dropped when I started tabbing through the fields, using all the shortcuts I'd learned, etc. I'm not a "real" tester but apparently I can test faster than our actual testers. I guess it helps that I've used the system for ages, that'd be why I'm the SME. And it's also easy to test fast when you wrote that particular test case yourself in the first place. It's muscle memory at this point.
|
# ? Aug 29, 2016 06:37 |
|
Antti posted:It was pretty fun when I sat down with a developer (I'm a subject-matter expert officially but I end up doing a fair amount of QA testing for reasons) and started going through a routine test case in our expert system that I've been doing regularly for two years by now, and their jaw dropped when I started tabbing through the fields, using all the shortcuts I'd learned, etc. I'm not a "real" tester but apparently I can test faster than our actual testers. I guess it helps that I've used the system for ages, that'd be why I'm the SME. And it's also easy to test fast when you wrote that particular test case yourself in the first place. It's muscle memory at this point. What. Are you doing this manually?
|
# ? Aug 29, 2016 07:10 |
|
KoRMaK posted:im so confused about writing tests first The only signal we care about in this case is "does the test pass?" and otherwise we aren't going to notice it. The benefit for him is as he described, though there's definitely an argument that can be made for leaving out the ones that are always going to pass as long as the class compiles successfully. Still, it doesn't really do any harm to the system. Even the seemingly trivial stuff can be useful, because if you make a change and a "trivial" test starts failing, you know you done hosed up.
|
# ? Aug 29, 2016 07:38 |
|
sarehu posted:What. We have automated most test cases. I don't know the nuts and bolts of why we haven't automated all of them because I'm the SME and not directly involved (and probably wouldn't understand the explanation either). Well there's actually one test case where you need to physically print out a document and the lines need to be just right. I do feature testing, some QA testing and exploratory acceptance testing manually whenever we do a major release because an epic pile of poo poo will roll downhill right at us if the deployed system isn't viable. I was actually just talking to a colleague last week about how while there's an existing plan to roll back a release we have never actually done it or even tried it. The wonders of working with highly specialized legacy business software. I'm only lurking this thread because we use half-assed scrum for development and my employer had me take product owner training recently, but with code I'm a complete novice. So I may use terms inappropriately or just the wrong terms outright. Sulphagnist fucked around with this message at 07:48 on Aug 29, 2016 |
# ? Aug 29, 2016 07:45 |
|
Finally getting around to fixing the git portion of our ancient code base. 4 gigs down to 125 megs. With a combination of a tiny Python script, the BFG repo cleaner, and git-lfs, magic happened here today.
|
# ? Aug 29, 2016 12:20 |
|
KoRMaK posted:im so confused about writing tests first the tests are checked in, sure. usually i separate boring 'did i remember to implement this?' tests from trickier 'does this do what i think it does?' tests into different files so most people probably never look at the 'useless' tests except maybe as examples of how to use the library/component
|
# ? Aug 29, 2016 12:36 |
|
the talent deficit posted:i write tons of tests that aren't strictly necessary because the first thing i do before writing code is write tests that exercise the interface. even if my tests are plainly useless (like the 'assert(type(foo.bar), String)` example) they at least force me to consider how the code is going to be used and require that i actually have thought about the preconditions necessary to use the interface. this also encourages writing nice clean implementations with a minimum of dependencies or side effects I'm firmly in the test after group, but I also write all my interfaces down on paper and then furiously draw arrows and scribble until it makes sense. I think the issue is that most people don't spend a couple minutes thinking through their design before they just start writing piles of garbage.
|
# ? Aug 29, 2016 13:13 |
|
We have "unit" tests, which I'm actively working on fixing up, and we also have regression tests that are completely separate from the main project and that run against the staging server's database. As in, Capybara tests that log into the QA server and interact with a live database. I asked them how they'd handle more than one person running the regression tests at a time, and they didn't have a good answer. At least one person on my team doesn't really understand when to be comprehensive in testing and when to just let things go. They're always really paranoid about things breaking and they manually pull every single one of our PR branches and run through the full list of regression tests (manually!) to make sure nothing broke. On the one hand, I feel really bad for them because that's a massive waste of time, and on the other, I feel like they shouldn't be doing that at all and have misunderstood the point of testing, which is to make things go faster - not slower. They've got like 5 or 6 PRs still waiting around because of how long it takes for them to approve them. I can't bring this entire team up to speed on testing and decent developer practices on my own, but I sure as hell have to at least be pushing for it myself cause there's no guarantee anyone else will.
|
# ? Aug 29, 2016 13:37 |
|
"Test until fear turns to boredom."
|
# ? Aug 29, 2016 14:17 |
|
Riven posted:"Test until fear turns to boredom." "...but avoid ending up with fear and boredom."
|
# ? Aug 29, 2016 14:20 |
|
Antti posted:Well there's actually one test case where you need to physically print out a document and the lines need to be just right. Wouldn't that depend somewhat on the printer?
|
# ? Aug 29, 2016 15:41 |
|
Antti posted:Well there's actually one test case where you need to physically print out a document and the lines need to be just right. Run it through ghostscript and compare the output to a reference image
|
# ? Aug 29, 2016 16:17 |
|
Munkeymon posted:Wouldn't that depend somewhat on the printer? If it's that important, should you not regularly test the printer?
|
# ? Aug 29, 2016 16:18 |
|
Che Delilas posted:A good argument for this is that it is a part of the ticket. Because an implicit part of every ticket is, "... while preserving the rest of the expected behavior in the system." Tests are a great way to do that and as you said they make everything easier when you have to revisit that section of the code. Management has my back, thankfully. This has been coming from other team members. I'm hoping that once we have to loop back to a component and the tests save us some headaches they'll come around.
|
# ? Aug 29, 2016 16:25 |
|
leper khan posted:If it's that important, should you not regularly test the printer? Well yeah, but that's something you can do with a static image without doing a bunch of monkey work in some LOB monstrosity. Hell, Windows has a standard test page that's never more than ~5 clicks away.
|
# ? Aug 29, 2016 16:25 |
|
Sedro posted:Run it through ghostscript and compare the output to a reference image I will parrot this word for word and see what happens! I can't go into detail because it's such a highly specific thing that anyone working at the same place will recognize me from it.
|
# ? Aug 29, 2016 16:54 |
|
sarehu posted:What.
|
# ? Aug 29, 2016 16:55 |
|
Vulture Culture posted:Controversial opinion incoming: everyone should regularly be doing some amount of manual testing. As someone who QAed things where the devs insisted tests passed but the feature itself (or unrelated features you'd see along the way, like "logging in") was hilariously broken, I'm in agreement.
|
# ? Aug 29, 2016 17:13 |
|
What I like about testing:
- When doing TDD and writing all the tests first, you fire up the application for the first time ever after like a week of development, and 90% of it works perfectly
- Look at all those pretty checkmarks i must be doing something right

What sucks about testing:
- I'm 100% convinced that there isn't a good way to do tests against a real database
- Architecting, writing, maintaining, and organizing tests
- When an actual business requirement changes and you have to dig through all of your old tests and figure out which ones need to be modified

MisterZimbu fucked around with this message at 19:55 on Aug 29, 2016 |
# ? Aug 29, 2016 19:16 |
|
Vulture Culture posted:Controversial opinion incoming: everyone should regularly be doing some amount of manual testing. This isn't controversial at all. There are some tests that can't be done by way of automation.
|
# ? Aug 29, 2016 19:41 |
|
ratbert90 posted:This isn't controversial at all. There are some tests that can't be done by way of automation. Depends on the app. A simple, stand-alone RESTful microservice can be 100% automated testing and deployment.
|
# ? Aug 29, 2016 22:25 |