|
seiken posted:step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code. Yeah. Knowing whether the tests are broken or if the code is garbage is something I struggle with in a few cases.
|
# ? Dec 31, 2014 16:46 |
|
|
HardDisk posted:Yeah. Knowing whether the tests are broken or if the code is garbage is something I struggle with in a few cases. Don't worry, it's TDD! The important thing is that the tests exist and they are passing. You can worry about correctness later
|
# ? Dec 31, 2014 16:59 |
|
seiken posted:step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code. no according to the nice consultant lady, the tests are supposed to all fail first! then you write the code that makes them turn green (or "pass" to use the technical term)
|
# ? Dec 31, 2014 17:21 |
|
lord funk posted:Can you all answer a dumb question for me: is TDD at all useful for catching UI issues, or is it only for model / framework testing? there's like a bunch of crazy people who are into somehow writing automated gui tests before the code is ready. there are totally people who claim to do that. i haven't worked with them though. automated gui testing is kind of bullshit - like selenium and all that poo poo is good but you're probably better off just hiring some dude to click buttons and be like "this logo is like 5 pixels off, or this icon doesn't look good." i've seen a lot of people end up with really brittle, expensive gui test suites that didn't find bugs. I have seen payoff with actually having a design, separating business logic from presentation layer, and having unit tests covering the business layer, but I haven't really encountered test automation of the presentation layer that really pays off more than having real people play with the product on different browsers/devices.
|
# ? Dec 31, 2014 17:33 |
|
I've worked on an automated GUI testing framework before, but it was mainly used for testing localized builds and required manual recording first. It was smart enough to not get stuck all the time on minor differences but I can't imagine doing TDD style tests for GUI.
|
# ? Dec 31, 2014 17:40 |
|
Basic smoke tests that verify that the UI actually opens and is at least minimally functional help prevent really dumb fuckups when you're pushing out a critical fix at 2 AM, but I've never seen anything more elaborate actually be useful at all, much less worth the effort of writing and maintaining them (and I have worked on things with fairly comprehensive UI tests).
|
# ? Dec 31, 2014 17:44 |
|
TheresaJayne posted:Could it be that the epoch Jan 1st 1970 was a thursday....
|
# ? Dec 31, 2014 18:46 |
|
Bruegels Fuckbooks posted:i've seen a lot of people end up with really brittle, expensive gui test suites that didn't find bugs. I've seen this several times. The tests are worthless because they're not reliable, and the effort to fix them is orders of magnitude greater than the amount of value they provide even when they're working properly. UI testing should really only be for basic smoke tests. Pretty much everything else can be unit and integration tested. The trick is to get the front-end developers writing tests, too. JavaScript is very testable. And of course, there's still tons of value in manual testing.
|
# ? Dec 31, 2014 20:12 |
|
pseudorandom name posted:The %G and %g formats to strftime() use the ISO 8601 week numbering year instead of the actual year as used by %Y and %y. date(1) posted:%G year of ISO week number (see %V); normally useful only with %V
|
# ? Dec 31, 2014 20:18 |
|
seiken posted:step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code. That's not how it works. You write a single test. The test fails. You make that test pass. Then you repeat. The functional correctness of your code is spelled out in your tests. As you build up your test suite, the tests give you a safety net you can use to refactor. This doesn't mean you won't miss test cases or introduce bugs, but it does mean that when you identify the bugs, it's trivial to add a new test case to catch it, and you can be confident that fixing that bug hasn't introduced regressions. It's not a panacea, but it definitely makes sense in some scenarios. The trick is to not be dogmatic about it. What you're describing is closer to code-first testing -- you write code, write a bunch of tests for that code (by running the code, taking the output, and assuming it's correct), and then you have a test suite full of test cases that validate that your bugs are present and start failing when you fix them. That's bad.
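To make the "safety net" point concrete, here's a minimal Python sketch (function names and cases are illustrative, not from the post): a bug report becomes a new failing test first, and the earlier assertions guard the fix against regressions.

```python
def normalize_name(name):
    # Initial implementation, written to make the first test below pass.
    return name.strip().lower()

assert normalize_name("  Alice ") == "alice"

# A bug turns up: internal runs of whitespace survive normalization.
# Step 1: capture the bug as a new test case.
# Step 2: fix the code; the earlier assertion catches any regression.
def normalize_name_fixed(name):
    return " ".join(name.split()).lower()

assert normalize_name_fixed("  Alice ") == "alice"      # old behavior intact
assert normalize_name_fixed("Bob   Smith") == "bob smith"  # bug fixed
print("suite green")
```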
|
# ? Dec 31, 2014 20:24 |
|
Ithaqua posted:That's not how it works. You write a single test. The test fails. You make that test pass. Then you repeat. this is the exact opposite of what you described with "write test that 1+1=2, then just write code that returns 2 for now", but fair enough. No need to explain what testing is to me
|
# ? Dec 31, 2014 20:31 |
|
In my experience automated GUI tests usually get written when the automation group in a big organization just needs to tick a box signaling their involvement with some project. Manual smoke tests accomplish the same thing but better, as long as you have a process that ensures they never get skipped. It's really fun trying to explain to an automation guy that the tests they just wrote are going to be obsolete in a week because the designers just split your app's home screen into a multi-pane layout or moved a button to a new screen or something.
|
# ? Dec 31, 2014 20:41 |
|
seiken posted:this is the exact opposite of what you described with "write test that 1+1=2, then just write code that returns 2 for now", but fair enough. No need to explain what testing is to me It's not the opposite at all, it's a very simplified example of how to do "by-the-book" TDD.

Test case 1: Adding two numbers returns the sum
Input: 1, 1
Output: 2

In this case, the code can just return 2 -- I have no other requirements spelled out yet.

Test case 2: Adding zero to a number returns the input
Input: 1, 0
Output: 1

I could just choose the inputs 2, 0 and have the original code work, but the assumption is that I am capable of deciding what inputs for my test are sane and reasonable. I also tend to assume that if I write a new test case and it's already passing, my inputs are bad or it's a redundant test case. Again, this is something that requires thought and discretion.
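The progression being described looks like this in Python (a sketch of the by-the-book exercise, with versioned function names added here for clarity):

```python
# Test case 1: adding two numbers returns the sum (input 1, 1 -> output 2).
# With no other requirements yet, the naive first implementation can
# literally hard-code the answer:
def sum_v1(a, b):
    return 2

assert sum_v1(1, 1) == 2  # green

# Test case 2: adding zero returns the other input (input 1, 0 -> output 1).
# sum_v1 now fails this case, which forces a generalization:
def sum_v2(a, b):
    return a + b

assert sum_v2(1, 1) == 2  # the old test still passes
assert sum_v2(1, 0) == 1  # and the new one does too
print("ok")
```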
|
# ? Dec 31, 2014 20:50 |
|
Sounds like a recipe for haphazard test cases and a shitload of wasted effort rewriting code each time you add a new test. Instead, write simpler chunks of code that are easily testable without some ridiculous iterative process
|
# ? Dec 31, 2014 21:22 |
|
Bruegels Fuckbooks posted:there's like a bunch of crazy people who are into somehow writing automated gui tests before the code is ready. there are totally people who claim to do that. i haven't worked with them though. The QA people in my department are exactly the 'automating GUI' type, and it is infuriating. Why, you ask? Because our product is a damned API! That is right, we have a perfectly programmable interface with documentation. What does our QA do? Demand crazy UIs that are basically crappy versions of our integration tests, then hit THEM with a custom in-house GUI test tool. About 1/3 of the bugs that get assigned to me end up being in the crappy UI test-harnesses or in the test tool itself.
|
# ? Dec 31, 2014 21:56 |
|
This is why I prefer QuickCheck-like property testing over individual cases. Long story short: tests constrain your solution space of code, and it's up to you to make sure that you aren't over- (unlikely) or under-constrained and everything is consistent
|
# ? Dec 31, 2014 22:00 |
|
seiken posted:step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code. If by "broken" you mean ugly or inefficient or plain old retarded looking, then it's unironically one of the best things about TDD. Even if you can't code for poo poo and the code you first come up with is completely retarded, you can still be confident that it works, and keeps working while you refactor it into the most elegant and efficient code the world has ever seen. With some extra "manual" labor of writing tests in advance you're enormously simplifying the goal of writing the smartest and bestest code. It's much easier to start with the ugliest solution and refine it confidently while the tests have your back, rather than coming up with a brilliant solution from scratch. pigdog fucked around with this message at 22:02 on Dec 31, 2014 |
# ? Dec 31, 2014 22:00 |
|
TheFreshmanWIT posted:The QA people in my department are exactly the 'automating GUI' type, and it is infuriating. Why, you ask? Because our product is a damned API! That is right, we have a perfectly programmable interface with documentation. What does our QA do? Demand crazy UIs that are basically crappy versions of our integration tests, then hit THEM with a custom in-house GUI test tool. About 1/3 of the bugs that get assigned to me end up being in the crappy UI test-harnesses or in the test tool itself. Your QA team is the horror.
|
# ? Dec 31, 2014 22:03 |
|
Ithaqua posted:It's not the opposite at all, it's a very simplified example of how to do "by-the-book" TDD. This is horrible TDD and you're doing it wrong or at the very least, explaining the development part of Test Driven Development very, very poorly. This is one of those poo poo examples they use to teach TDD which you are supposed to quickly move on from and never use for real. You're not even describing code that meets the requirements of the test case. You are the cargo cult.
|
# ? Dec 31, 2014 22:27 |
|
pigdog posted:Even if you can't code for poo poo and the code you first come up with is completely retarded, you can still be confident that it works You can be confident that it passes your tests, and nothing more. Congratulations, you have successfully moved the point of failure from one chunk of code you wrote to another chunk of code you wrote!
|
# ? Dec 31, 2014 22:29 |
|
Huh. I was sure everyone would be in agreement about TDD. Quite the surprise here.
|
# ? Dec 31, 2014 22:29 |
|
Soricidus posted:You can be confident that it passes your tests, and nothing more. Congratulations, you have successfully moved the point of failure from one chunk of code you wrote to another chunk of code you wrote! code:
WHERE MY HAT IS AT fucked around with this message at 22:38 on Dec 31, 2014 |
# ? Dec 31, 2014 22:35 |
|
WHERE MY HAT IS AT posted:
|
# ? Dec 31, 2014 23:00 |
|
Obviously a better example would make for a better argument, but I'm sure the guy's still advocating using brainpower to find a general solution which can then be tested.
|
# ? Dec 31, 2014 23:02 |
|
_aaron posted:Well, yeah. You write more tests and modify sum() to ensure they pass. Then maybe you realize "hey, an if statement for every combination of inputs is loving stupid! there's probably a better way to do this..." You can refactor sum() to something cleaner/more efficient and be confident you didn't cause any regressions in your refactoring because you've got this whole suite of tests! Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time.
|
# ? Dec 31, 2014 23:06 |
|
I'm probably a horror to you guys in that we barely test anything at all, but most of my stuff has no interface and is server-side XML crunching/database stuff. Off the top of my head this is stuff I usually test for, and this is usually for OPC (other people's code):

- Integer/double/decimal overflow
- Signed/unsigned behavior
- String overflow/string null
- High bit character encoding
- Date/time fuckups
- Malformed XML
- Line endings

Since 99.99% of the stuff is either going into or out of a database anyway, strong column typing/type sanity checks usually takes care of everything anyway. You gets your error (e.g. "integer cannot be cast to varchar") out of the log and fixes it. That said, on our last big WebDev project, we threw some hours at usertesting.com. That was pretty interesting; the service is more about general usability but we did find some obscure actual functional bugs out of it, like how pressing enter in the search box was firing a different event than clicking the search button on iOS x.y, so I could see the value of more standardized testing for interface-heavy projects.
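Two of the checks in that list can be sketched with nothing but the Python standard library (the feed format and function name here are illustrative assumptions, not from the post): reject malformed XML at the parser, and round-trip dates through the one format you expect so a bad feed fails fast instead of storing a wrong value.

```python
import xml.etree.ElementTree as ET
from datetime import datetime

# Malformed XML: the parser should reject it loudly rather than
# letting garbage reach the database.
try:
    ET.fromstring("<row><amount>42</row>")  # mismatched closing tag
except ET.ParseError:
    print("malformed XML rejected")

# Date/time fuckups: parse strictly against the expected format so a
# feed that switches to DD/MM/YYYY blows up instead of loading quietly.
def parse_feed_date(s):
    return datetime.strptime(s, "%Y-%m-%d")

assert parse_feed_date("2014-12-31").year == 2014
try:
    parse_feed_date("31/12/2014")
except ValueError:
    print("bad date format rejected")
```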
|
# ? Dec 31, 2014 23:27 |
|
Steve French posted:Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time. This is the right way to do TDD. Write an initial test case, write some reasonable code that makes it pass and then write additional tests that fill out any corner cases you care about (not all corner cases, just the important ones). This leaves you with tests that define the important behaviors of the system and doesn't leave you writing bad code that doesn't help.
|
# ? Dec 31, 2014 23:32 |
|
Steve French posted:Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time. Exactly. Like, the only thing I can see people arguing against is the idea that you should deliberately write incorrect code at any point and expect this to improve the end result.
|
# ? Dec 31, 2014 23:34 |
|
I'm a big fan of property driven testing where it makes sense to do so. You can start to build up a nice library of reusable generators and tests, and rather than feeling like you have to think up all the test cases (none, one, some, lots, oops) a lot of them just come naturally out of the generators.
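A hand-rolled sketch of that idea, assuming no property-testing library is available (real ones like QuickCheck or Hypothesis do this far better, with shrinking of failing cases): a generator produces the "none, one, some, lots" inputs automatically, and each property is checked across many random cases.

```python
import random

# Reusable generator: lists of ints covering empty, short, and long cases.
def gen_int_list(rng, max_len=50):
    return [rng.randint(-1000, 1000) for _ in range(rng.randint(0, max_len))]

# Run a property against many generated inputs; seeded for reproducibility.
def check_property(prop, gen, trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        xs = gen(rng)
        assert prop(xs), f"property failed for {xs!r}"

# Properties of sorting: idempotent, and length-preserving.
check_property(lambda xs: sorted(sorted(xs)) == sorted(xs), gen_int_list)
check_property(lambda xs: len(sorted(xs)) == len(xs), gen_int_list)
print("200 random cases passed per property")
```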
|
# ? Dec 31, 2014 23:35 |
|
Soricidus posted:You can be confident that it passes your tests, and nothing more. Congratulations, you have successfully moved the point of failure from one chunk of code you wrote to another chunk of code you wrote!
|
# ? Dec 31, 2014 23:40 |
|
Steve French posted:Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time. What if you're writing something that you don't fully understand when you sit down to write the method? Well, you know about cases A and B, so write tests for those, then write code to make them pass. Maybe in doing this, you think of another case C that you hadn't thought about before. Well, write a test case for it, and write code to make that test case pass. But wait, handling case C as an extension of what you've already written might be messy. So now you refactor. And you're confident that this new refactored code still works for the old A and B cases as well as the new C case that prompted the refactoring. Which part are you having trouble with here? The use of addition as an example, or the idea that you would ever write a method without a full understanding of all of its edge and corner cases up front? If it's the former, then I agree; addition is not a great example here. If it's the latter, then we have had very fundamentally different experiences as software developers.
|
# ? Dec 31, 2014 23:55 |
|
_aaron posted:When you're writing a method that just adds two numbers, you're right. It's fairly simple, not many edge cases, etc. I would certainly not claim that I've always had a complete understanding without any mistakes up front; nobody's perfect. There's a difference, however, between having a full understanding as far as you're aware (with the knowledge that your awareness may not be complete), and knowing of something that you don't understand. Do I sit down to write code knowing about aspects of the problem that I haven't resolved? I'm sure, sometimes, but it's something I'd try to avoid. Would I advocate a development methodology that seems to actively encourage doing this? Definitely not. Regardless of whether TDD advocates this (nobody can seem to agree on what exactly it advocates anyway), I'm responding to the specific suggestions made in this thread. And sure, the analogy is not perfect, but I'm not the one who made it. To be more specific, I'm referring to the idea of intentionally writing and repeatedly revising an incomplete/incorrect implementation of a method to pass successive tests, rather than starting with a best effort of code and tests (regardless of which is written first), and then tweaking the code/tests and adding more tests as necessary to handle unforeseen edge cases. Steve French fucked around with this message at 00:07 on Jan 1, 2015 |
# ? Jan 1, 2015 00:01 |
|
Steve is right, and here's a more real-life failure of trying to use TDD to iteratively solve a problem. http://xprogramming.com/xpmag/OkSudoku
|
# ? Jan 1, 2015 00:16 |
|
ohgodwhat posted:Note we don't use Java at all. This guy also started off the phone screen with "Is this an interview? Oh no I got the wrong time completely, I don't understand time zones, can you call me back in an hour?" which would have been okay except for everything else. From pages and pages back, but: Candidate says he has 3 years experience with SQL. During the technical assessment: "Join? Oh, I haven't ever had to write one of those yet but I'd really like to!" Says he has 3 years ASP webforms experience: (fortunately we're fully MVC now) "User control. User control. User control. You mean user interface? Like a mouse?" Asked what his goals are: "I want to write elegant code." What do you mean, 'elegant code'? "You know, code that is elegant. It has elegance. Elegant. Code." Asked about source control: "Well, you have to have control of your source. Copyright is big." I wanted to hire him and just have him bang away at a Word doc all day that we'd tell him is Visual Studio, purely for comic relief.
|
# ? Jan 1, 2015 00:21 |
|
Janitor Prime posted:Steve is right and here's a more real life failure of trying to use tdd to iterativey solve a problem. http://xprogramming.com/xpmag/OkSudoku Yeah, and then there's the other side -- http://norvig.com/sudoku.html And the takeaway here: http://vladimirlevin.blogspot.com.au/2007/04/tdd-is-not-algorithm-generator.html It's why TDD doesn't work for beginners, because they need to have some concept of how they're going to solve the problem first. If you are literally programming by coincidence, TDD won't make it less of a coincidence. It's a tool that needs to be properly applied, just like any other method. Maluco Marinero fucked around with this message at 00:28 on Jan 1, 2015 |
# ? Jan 1, 2015 00:26 |
|
Steve French posted:To be more specific, I'm referring to the idea of intentionally writing and repeatedly revising an incomplete/incorrect implementation of a method to pass successive tests, rather than starting with a best effort of code and tests (regardless of which is written first), and then tweaking the code/tests and adding more tests as necessary to handle unforeseen edge cases. _aaron fucked around with this message at 01:05 on Jan 1, 2015 |
# ? Jan 1, 2015 00:57 |
|
User controls? You mean those bad things that make life bad?
|
# ? Jan 1, 2015 02:26 |
|
Steve French posted:To be more specific, I'm referring to the idea of intentionally writing and repeatedly revising an incomplete/incorrect implementation of a method to pass successive tests, rather than starting with a best effort of code and tests (regardless of which is written first), and then tweaking the code/tests and adding more tests as necessary to handle unforeseen edge cases. In my (admittedly relatively limited) experience, it generally only takes until the second or third test before you have to start actually building an algorithm instead of handling special cases. As in, the first test is kind of a base case ("when there is nothing, return 0"), and then from there, you build up functionality. Once you have the skeleton of an algorithm together, successive tests do tend to be for cases that haven't yet been accounted for. As I understand it, the danger of actually putting together a "complete" implementation of an algorithm is that you don't actually have confidence that 1) your tests adequately cover everything the code does, and 2) you can change the implementation without actually changing input/output functionality. While I appreciate where you're coming from (because yes, it does feel really stupid to put together a stupid algorithm just to pass tests when you know what the real algorithm is going to be like), I think it's important to make sure you aren't forgetting something when you write the algorithm. And, even though I've not heard it stated elsewhere in the TDD world, I think it also gives you confidence that your tests are actually testing what you think they should be testing. I've been in the situation where I've written tests for an algorithm that already exists, and I write the test, and it passes, and I'm not really sure whether or not it passed because the code works, or because I wrote the test incorrectly. 
(The solution, I know, is to comment out the code and see whether or not the test passes, but in those situations, I still end up being uncertain that I really know what's going on). Another part of TDD that I think is being missed in this discussion is that it's not just about having tests for code. It's also about using the tests to help guide the design and development of the code, and I mean that at a scale larger than one particular algorithm. What objects do you have and how do they interact with each other? Do they follow the SOLID principles (unless there's a good reason not to)? Are things too tightly coupled? How do things work at a higher level? While TDD isn't necessary to get good, clean code, it can help. A lot of people just don't seem to get through the initial state of "red, green, refactor" inside of a class, instead of really thinking about their architecture. I'm not confident that TDD does much good for math implementations; I feel like they're pretty similar to factories in that regard, with few branches/loops. But for business logic—especially when you have clear requirements on how things should work—then I think TDD is at its best stride. As for UI stuff, I think it's pointless to write tests for views. And I'm not yet convinced that UATs add much value. Write tests for clearly defined input/output in your models; not for passthroughs, wiring, factories, or "how good it looks".
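The "comment out the code and see whether the test fails" check described above can be done inline as a crude sanity pass (a sketch; dedicated mutation-testing tools automate exactly this): run the test against a deliberately broken implementation, and if it still passes, the test is vacuous.

```python
def is_even(n):
    return n % 2 == 0

def run_tests(impl):
    assert impl(4)
    assert not impl(7)

# The tests pass against the real implementation...
run_tests(is_even)

# ...and, crucially, fail against a broken one. If they still passed,
# they wouldn't actually be exercising the behavior.
def broken_is_even(n):
    return True

try:
    run_tests(broken_is_even)
    print("tests are vacuous")
except AssertionError:
    print("tests correctly catch the broken version")
```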
|
# ? Jan 1, 2015 03:40 |
|
fleshweasel posted:User controls? You mean those bad things that make life bad? Yes. Though as I said, we've since shifted all new development to be done MVC, and are spending a fair amount of our time converting webforms stuff over to MVC. In the course of that migration I have come across many things that were horrible but one that is relevant would be a page where the same 10 controls were copy pasted a few dozen times with the only differences being the IDs had numbers appended and that each set after the first was set to be hidden. There was logic for populating and making them visible based on the results of non-parameterized SQL in the code behind. The icing on that cake was a single comment at the bottom of the code behind - "There's got to be a better way to do this."
|
# ? Jan 1, 2015 08:43 |
|
|
Maluco Marinero posted:Yeah, and then there's the other side -- http://norvig.com/sudoku.html I agree with this sentiment; it took me a while to "get" TDD and I don't think I have it completely. The idea is good, and if you work in a workplace that requires a minimum of 90% coverage then it saves a lot of time and effort trying to shoehorn coverage tests into the code afterwards. But as a 100% must-do thing - I disagree. FFS, why test the setters and getters? We know they work; they have always worked. And if the developer is suitably obtuse they can have some devastating effects on the code. For instance, here's a first test for a getAmount() getter I once saw someone write Java code:
|
# ? Jan 1, 2015 10:14 |