Space Kablooey
May 6, 2009


seiken posted:

step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code.

Yeah. Knowing whether the tests are broken or if the code is garbage is something I struggle with in a few cases.


seiken
Feb 7, 2005

hah ha ha

HardDisk posted:

Yeah. Knowing whether the tests are broken or if the code is garbage is something I struggle with in a few cases.

Don't worry, it's TDD! The important thing is that the tests exist and they are passing. You can worry about correctness later :rolleyes:

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

seiken posted:

step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code.

no according to the nice consultant lady, the tests are supposed to all fail first! then you write the code that makes them turn green (or "pass" to use the technical term)

Bruegels Fuckbooks
Sep 14, 2004


lord funk posted:

Can you all answer a dumb question for me: is TDD at all useful for catching UI issues, or is it only for model / framework testing?

there's like a bunch of crazy people who are into somehow writing automated gui tests before the code is ready. there are totally people who claim to do that. i haven't worked with them though.

automated gui testing is kind of bullshit - like selenium and all that poo poo is good but you're probably better off just hiring some dude to click buttons and be like "this logo is like 5 pixels off, or this icon doesn't look good." i've seen a lot of people end up with really brittle, expensive gui test suites that didn't find bugs.

I have seen payoff with actually having a design, separating business logic from presentation layer, and having unit tests covering the business layer, but I haven't really encountered test automation of the presentation layer that really pays off more than having real people play with the product on different browsers/devices.

omeg
Sep 3, 2012

I've worked on an automated GUI testing framework before, but it was mainly used for testing localized builds and required manual recording first. It was smart enough to not get stuck all the time on minor differences but I can't imagine doing TDD style tests for GUI.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Basic smoke tests that verify that the UI actually opens and is at least minimally functional help prevent really dumb fuckups when you're pushing out a critical fix at 2 AM, but I've never seen anything more elaborate actually be useful at all, much less worth the effort of writing and maintaining (and I have worked on things with fairly comprehensive UI tests).

Bonfire Lit
Jul 9, 2008

If you're one of the sinners who caused this please unfriend me now.

TheresaJayne posted:

Could it be that the epoch Jan 1st 1970 was a thursday....

That would show a rather remarkable feat of precognition on ISO's part, since ISO/R 2015 (where the ISO week was originally defined) dates from 1971 and the UNIX epoch wasn't set to 1970-01-01 until some two years later.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Bruegels Fuckbooks posted:

i've seen a lot of people end up with really brittle, expensive gui test suites that didn't find bugs.

I've seen this several times. The tests are worthless because they're not reliable, and the effort to fix them is orders of magnitude greater than the amount of value they provide even when they're working properly. UI testing should really only be for basic smoke tests. Pretty much everything else can be unit and integration tested. The trick is to get the front-end developers writing tests, too. JavaScript is very testable.

And of course, there's still tons of value in manual testing.

ExcessBLarg!
Sep 1, 2001

pseudorandom name posted:

The %G and %g formats to strftime() use the ISO 8601 week numbering year instead of the actual year as used by %Y and %y.

GNU libc's strftime(3) page has a reasonable explanation of "%G" that would caution anyone against using it in favor of "%Y", but coreutils' date(1) has a much better warning:

date(1) posted:

%G year of ISO week number (see %V); normally useful only with %V

Don't use "%V"? Don't use "%G"! Simple as that.
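
For illustration, the same ISO 8601 week rules that %G and %V expose are implemented by Python's stdlib `date.isocalendar()`, which makes the calendar-year/ISO-year mismatch easy to see around New Year:

```python
import datetime

# 2016-01-01 falls in ISO week 53 of ISO year *2015*: ISO weeks start on
# Monday, and week 1 is the week containing January 4th.
d = datetime.date(2016, 1, 1)
iso_year, iso_week, iso_weekday = d.isocalendar()

print(d.year)      # calendar year: 2016 (what %Y gives you)
print(iso_year)    # ISO week-numbering year: 2015 (what %G gives you)
print(iso_week)    # 53
```

Mixing %G with %d or %m yields dates that look off by a year around the year boundary, which is exactly why the man page warns against using %G without %V.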

New Yorp New Yorp
Jul 18, 2003


seiken posted:

step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code.

That's not how it works. You write a single test. The test fails. You make that test pass. Then you repeat. The functional correctness of your code is spelled out in your tests. As you build up your test suite, the tests give you a safety net you can use to refactor. This doesn't mean you won't miss test cases or introduce bugs, but it does mean that when you identify the bugs, it's trivial to add a new test case to catch it, and you can be confident that fixing that bug hasn't introduced regressions.

It's not a panacea, but it definitely makes sense in some scenarios. The trick is to not be dogmatic about it.

What you're describing is closer to code-first testing -- you write code, write a bunch of tests for that code (by running the code, taking the output, and assuming it's correct), and then you have a test suite full of test cases that validate that your bugs are present and start failing when you fix them. That's bad.

seiken
Feb 7, 2005


Ithaqua posted:

That's not how it works. You write a single test. The test fails. You make that test pass. Then you repeat.

this is the exact opposite of what you described with "write test that 1+1=2, then just write code that returns 2 for now", but fair enough. No need to explain what testing is to me

speng31b
May 8, 2010

In my experience automated GUI tests usually get written when the automation group in a big organization just needs to tick a box signaling their involvement with some project. Manual smoke tests accomplish the same thing but better, as long as you have a process that ensures they never get skipped.

It's really fun trying to explain to an automation guy that the tests they just wrote are going to be obsolete in a week because the designers just split your app's home screen into a multi-pane layout or moved a button to a new screen or something.

New Yorp New Yorp
Jul 18, 2003


seiken posted:

this is the exact opposite of what you described with "write test that 1+1=2, then just write code that returns 2 for now", but fair enough. No need to explain what testing is to me

It's not the opposite at all, it's a very simplified example of how to do "by-the-book" TDD.

Test case 1: Adding two numbers returns the sum
Input: 1, 1
Output: 2

In this case, the code can just return 2 -- I have no other requirements spelled out yet.

Test case 2: Adding zero to a number returns the input
Input: 1, 0
Output: 1

I could just choose the inputs 2, 0 and have the original code work, but the assumption is that I am capable of deciding what inputs for my test are sane and reasonable. I also tend to assume that if I write a new test case and it's already passing, my inputs are bad or it's a redundant test case. Again, this is something that requires thought and discretion.
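
Spelled out as runnable Python (pytest-style bare asserts; `add` is the hypothetical function under test, shown here after the second test case has forced it to generalize past "return 2"):

```python
def add(a, b):
    return a + b  # the implementation after the second test forces generality

# Test case 1: adding two numbers returns the sum.
def test_add_two_numbers():
    assert add(1, 1) == 2

# Test case 2: adding zero to a number returns the input.
# Inputs (1, 0) are chosen deliberately: (2, 0) would pass even against
# the degenerate "return 2" implementation, so it would prove nothing.
def test_add_zero_identity():
    assert add(1, 0) == 1
```

Under strict TDD the code between test 1 and test 2 would literally be `return 2`; test 2 is the one that makes that answer stop working.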

Steve French
Sep 8, 2003

Sounds like a recipe for haphazard test cases and a shitload of wasted effort rewriting code each time you add a new test.

Instead, write simpler chunks of code that are easily testable without some ridiculous iterative process

TheFreshmanWIT
Feb 17, 2012

Bruegels Fuckbooks posted:

there's like a bunch of crazy people who are into somehow writing automated gui tests before the code is ready. there are totally people who claim to do that. i haven't worked with them though.

automated gui testing is kind of bullshit - like selenium and all that poo poo is good but you're probably better off just hiring some dude to click buttons and be like "this logo is like 5 pixels off, or this icon doesn't look good." i've seen a lot of people end up with really brittle, expensive gui test suites that didn't find bugs.

I have seen payoff with actually having a design, separating business logic from presentation layer, and having unit tests covering the business layer, but I haven't really encountered test automation of the presentation layer that really pays off more than having real people play with the product on different browsers/devices.

The QA people in my department are exactly the 'automating GUI' type, and it is infuriating. Why, you ask? Because our product is a damned API! That is right, we have a perfectly programmable interface with documentation. What does our QA do? Demand crazy UIs that are basically crappy versions of our integration tests, then hit THEM with a custom in-house GUI test tool. About 1/3 of the bugs that get assigned to me end up being in the crappy UI test harnesses or in the test tool itself.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
This is why I prefer QuickCheck-like property testing over individual cases

Long story short, tests constrain your solution space of code, and it's up to you to make sure that you aren't over- (unlikely) or under-constrained and everything is consistent
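
A minimal hand-rolled sketch of the idea (stdlib only; real tools like QuickCheck or Python's hypothesis also generate and shrink counterexamples for you, and the names below are illustrative):

```python
import random

def add(a, b):
    return a + b

def check_property(prop, cases=200):
    """Throw random inputs at a two-argument property; fail on any counterexample."""
    for _ in range(cases):
        a = random.randint(-10**6, 10**6)
        b = random.randint(-10**6, 10**6)
        assert prop(a, b), f"counterexample: a={a}, b={b}"

# Properties constrain the whole solution space instead of pinning single points:
check_property(lambda a, b: add(a, b) == add(b, a))   # commutativity
check_property(lambda a, b: add(a, 0) == a)           # identity
check_property(lambda a, b: add(a, b) - b == a)       # subtraction inverts
```

The degenerate `return 2` implementation from earlier in the thread survives `add(1, 1) == 2` but has essentially no chance of surviving these.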

pigdog
Apr 23, 2004

by Smythe

seiken posted:

step 1: write a bunch of tests and make sure they all pass despite your code being completely and utterly broken. This way you know that the tests are reliably judging the correctness of the code.

If by "broken" you mean ugly or inefficient or plain old retarded looking, then it's unironically one of the best things about TDD.

Even if you can't code for poo poo and the code you first come up with is completely retarded, you can still be confident that it works, and keeps working while you refactor it into the most elegant and efficient code the world has ever seen. With some extra "manual" labor of writing tests in advance you're enormously simplifying the goal of writing the smartest and bestest code. It's much easier to start with the ugliest solution and refine it confidently while the tests have your back, rather than coming up with a brilliant solution from scratch.

pigdog fucked around with this message at 22:02 on Dec 31, 2014

Evil_Greven
Feb 20, 2007

Whadda I got to,
whadda I got to do
to wake ya up?

To shake ya up,
to break the structure up!?

TheFreshmanWIT posted:

The QA people in my department are exactly the 'automating GUI' type, and it is infuriating. Why, you ask? Because our product is a damned API! That is right, we have a perfectly programmable interface with documentation. What does our QA do? Demand crazy UIs that are basically crappy versions of our integration tests, then hit THEM with a custom in-house GUI test tool. About 1/3 of the bugs that get assigned to me end up being in the crappy UI test harnesses or in the test tool itself.

Your QA team is the horror.

Obsurveyor
Jan 10, 2003

Ithaqua posted:

It's not the opposite at all, it's a very simplified example of how to do "by-the-book" TDD.

Test case 1: Adding two numbers returns the sum
Input: 1, 1
Output: 2

In this case, the code can just return 2 -- I have no other requirements spelled out yet.

Test case 2: Adding zero to a number returns the input
Input: 1, 0
Output: 1

This is horrible TDD and you're doing it wrong or at the very least, explaining the development part of Test Driven Development very, very poorly. This is one of those poo poo examples they use to teach TDD which you are supposed to quickly move on from and never use for real. You're not even describing code that meets the requirements of the test case. You are the cargo cult.

Soricidus
Oct 21, 2010
freedom-hating statist shill

pigdog posted:

Even if you can't code for poo poo and the code you first come up with is completely retarded, you can still be confident that it works

You can be confident that it passes your tests, and nothing more. Congratulations, you have successfully moved the point of failure from one chunk of code you wrote to another chunk of code you wrote!

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Huh. I was sure everyone would be in agreement about TDD. Quite the surprise here.

WHERE MY HAT IS AT
Jan 7, 2011

Soricidus posted:

You can be confident that it passes your tests, and nothing more. Congratulations, you have successfully moved the point of failure from one chunk of code you wrote to another chunk of code you wrote!

code:
def sum(a, b):
    if (a == 1 and b == 1):
        return 2

    return 1

Correct code! No worries, I'll just add more ifs for different inputs as I write tests for them.

WHERE MY HAT IS AT fucked around with this message at 22:38 on Dec 31, 2014

_aaron
Jul 24, 2007
The underscore is silent.

WHERE MY HAT IS AT posted:

code:
def sum(a, b):
    if (a == 1 and b == 1):
        return 2

    return 1

Correct code! No worries, I'll just add more ifs for different inputs as I write tests for them.

Well, yeah. You write more tests and modify sum() to ensure they pass. Then maybe you realize "hey, an if statement for every combination of inputs is loving stupid! there's probably a better way to do this..." You can refactor sum() to something cleaner/more efficient and be confident you didn't cause any regressions in your refactoring because you've got this whole suite of tests!
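
The endgame of that loop, sketched out (the name `sum` mirrors the thread's example even though it shadows Python's builtin; the asserts stand in for the accumulated test suite):

```python
def sum(a, b):
    # Once enough tests exist, the pile of ifs obviously collapses to this.
    return a + b

# The accumulated suite is what makes the refactor safe:
assert sum(1, 1) == 2     # the one case the original if-chain handled
assert sum(1, 0) == 1
assert sum(-3, 5) == 2    # a case the if-chain's "return 1" fallback got wrong
```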

brap
Aug 23, 2004

Grimey Drawer
Obviously a better example would make for a better argument, but I'm sure the guy's still advocating using brainpower to find a general solution which can then be tested.

Steve French
Sep 8, 2003

_aaron posted:

Well, yeah. You write more tests and modify sum() to ensure they pass. Then maybe you realize "hey, an if statement for every combination of inputs is loving stupid! there's probably a better way to do this..." You can refactor sum() to something cleaner/more efficient and be confident you didn't cause any regressions in your refactoring because you've got this whole suite of tests!

Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

I'm probably a horror to you guys in that we barely test anything at all, but most of my stuff has no interface and is server-side XML crunching/database stuff. Off the top of my head this is stuff I usually test for, and this is usually for OPC (other people's code):
- Integer/double/decimal overflow
- Signed/unsigned behavior
- String overflow/string null
- High bit character encoding
- Date/time fuckups
- Malformed XML
- Line endianness

Since 99.99% of the stuff is either going into or out of a database anyway, strong column typing/type sanity checks usually take care of everything. You gets your error (e.g. "integer cannot be cast to varchar") out of the log and fixes it.
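
For the malformed-XML entry on that list, a stdlib sketch of the kind of check involved (`parse_feed` is a hypothetical wrapper; the point is asserting the failure mode, not the happy path):

```python
import xml.etree.ElementTree as ET

def parse_feed(text):
    """Hypothetical wrapper: parse XML, surfacing a clean error on garbage input."""
    try:
        return ET.fromstring(text)
    except ET.ParseError as e:
        raise ValueError(f"malformed feed: {e}") from None

# Happy path: well-formed input parses.
root = parse_feed("<products><sku>A-1</sku></products>")
assert root.find("sku").text == "A-1"

# The case worth testing: truncated/mismatched XML fails predictably
# instead of propagating half-parsed data toward the database.
try:
    parse_feed("<products><sku>A-1</products>")
    assert False, "should have raised"
except ValueError:
    pass
```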

That said, on our last big WebDev project, we threw some hours at usertesting.com. That was pretty interesting; the service is more about general usability but we did find some obscure actual functional bugs out of it, like how pressing enter in the search box was firing a different event than clicking the search button on iOS x.y, so I could see the value of more standardized testing for interface-heavy projects.

NovemberMike
Dec 28, 2008

Steve French posted:

Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time.

This is the right way to do TDD. Write an initial test case, write some reasonable code that makes it pass and then write additional tests that fill out any corner cases you care about (not all corner cases, just the important ones). This leaves you with tests that define the important behaviors of the system and doesn't leave you writing bad code that doesn't help.
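
A sketch of that flow with a small hypothetical example: the first test drives a real implementation straight away, and the follow-up tests pin only the corners worth caring about:

```python
def clamp(x, lo, hi):
    # Reasonable code from the start -- no "return 5 for now" detour.
    return max(lo, min(x, hi))

assert clamp(5, 0, 10) == 5      # initial, typical case
assert clamp(-3, 0, 10) == 0     # corner: below the range
assert clamp(99, 0, 10) == 10    # corner: above the range
assert clamp(0, 0, 10) == 0      # corner: exactly on a boundary
```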

Soricidus
Oct 21, 2010

Steve French posted:

Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time.

Exactly.

Like, the only thing I can see people arguing against is the idea that you should deliberately write incorrect code at any point and expect this to improve the end result.

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
I'm a big fan of property driven testing where it makes sense to do so. You can start to build up a nice library of reusable generators and tests, and rather than feeling like you have to think up all the test cases (none, one, some, lots, oops) a lot of them just come naturally out of the generators.
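
A stdlib sketch of what those reusable generators can look like (names are illustrative); the "none, one, some, lots" cases fall out of composition rather than being enumerated by hand:

```python
import random

def ints(lo=-1000, hi=1000):
    return lambda: random.randint(lo, hi)

def lists_of(gen, max_len=50):
    # Lengths 0..max_len naturally cover none/one/some/lots.
    return lambda: [gen() for _ in range(random.randint(0, max_len))]

def check(prop, gen, cases=100):
    for _ in range(cases):
        value = gen()
        assert prop(value), f"counterexample: {value!r}"

# The same generators get reused across properties:
check(lambda xs: sorted(xs) == sorted(sorted(xs)), lists_of(ints()))  # idempotent
check(lambda xs: len(sorted(xs)) == len(xs), lists_of(ints()))        # length-preserving
```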

pigdog
Apr 23, 2004


Soricidus posted:

You can be confident that it passes your tests, and nothing more. Congratulations, you have successfully moved the point of failure from one chunk of code you wrote to another chunk of code you wrote!

It's human to err so why even bother with programming :negative:

_aaron
Jul 24, 2007

Steve French posted:

Or maybe you choose not to implement your code only considering the narrow circumstances of one test at a time, and you realize from the get go that an if statement for every combination of inputs is loving stupid, and avoid wasting your time.
When you're writing a method that just adds two numbers, you're right. It's fairly simple, not many edge cases, etc.

What if you're writing something that you don't fully understand when you sit down to write the method? Well, you know about cases A and B, so write tests for those, then write code to make them pass. Maybe in doing this, you think of another case C that you hadn't thought about before. Well, write a test case for it, and write code to make that test case pass. But wait, handling case C as an extension of what you've already written might be messy. So now you refactor. And you're confident that this new refactored code still works for the old A and B cases as well as the new C case that prompted the refactoring.

Which part are you having trouble with here? The use of addition as an example, or the idea that you would ever write a method without a full understanding of all of its edge and corner cases up front? If it's the former, then I agree; addition is not a great example here. If it's the latter, then we have had very fundamentally different experiences as software developers.

Steve French
Sep 8, 2003

_aaron posted:

When you're writing a method that just adds two numbers, you're right. It's fairly simple, not many edge cases, etc.

What if you're writing something that you don't fully understand when you sit down to write the method? Well, you know about cases A and B, so write tests for those, then write code to make them pass. Maybe in doing this, you think of another case C that you hadn't thought about before. Well, write a test case for it, and write code to make that test case pass. But wait, handling case C as an extension of what you've already written might be messy. So now you refactor. And you're confident that this new refactored code still works for the old A and B cases as well as the new C case that prompted the refactoring.

Which part are you having trouble with here? The use of addition as an example, or the idea that you would ever write a method without a full understanding of all of its edge and corner cases up front? If it's the former, then I agree; addition is not a great example here. If it's the latter, then we have had very fundamentally different experiences as software developers.

I would certainly not claim that I've always had a complete understanding without any mistakes up front; nobody's perfect. There's a difference, however, between having a full understanding as far as you're aware (with the knowledge that your awareness may not be complete), and knowing of something that you don't understand. Do I sit down to write code knowing about aspects of the problem that I haven't resolved? I'm sure, sometimes, but it's something I'd try to avoid. Would I advocate a development methodology that seems to actively encourage doing this? Definitely not.

Regardless of whether TDD advocates this (nobody can seem to agree on what exactly it advocates anyway), I'm responding to the specific suggestions made in this thread. And sure, the analogy is not perfect, but I'm not the one who made it.

To be more specific, I'm referring to the idea of intentionally writing and repeatedly revising an incomplete/incorrect implementation of a method to pass successive tests, rather than starting with a best effort of code and tests (regardless of which is written first), and then tweaking the code/tests and adding more tests as necessary to handle unforeseen edge cases.

Steve French fucked around with this message at 00:07 on Jan 1, 2015

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

Steve is right, and here's a more real-life failure of trying to use TDD to iteratively solve a problem: http://xprogramming.com/xpmag/OkSudoku

metztli
Mar 19, 2006
Which lead to the obvious photoshop, making me suspect that their ad agencies or creative types must be aware of what goes on at SA

ohgodwhat posted:

Note we don't use Java at all. This guy also started off the phone screen with "Is this an interview? Oh no I got the wrong time completely, I don't understand time zones, can you call me back in an hour?" which would have been okay except for everything else.

I would love to hear about the bad interviews I'm sure some of you have had. :)

From pages and pages back, but:

Candidate says he has 3 years experience with SQL. During the technical assessment:
"Join? Oh, I haven't ever had to write one of those yet but I'd really like to!"

Says he has 3 years ASP webforms experience: (fortunately we're fully MVC now)
"User control. User control. User control. You mean user interface? Like a mouse?"

Asked what his goals are:
"I want to write elegant code."
What do you mean, 'elegant code'?
"You know, code that is elegant. It has elegance. Elegant. Code."

Asked about source control:
"Well, you have to have control of your source. Copyright is big."

I wanted to hire him and just have him bang away at a Word doc all day that we'd tell him is Visual Studio, purely for comic relief.

Maluco Marinero
Jan 18, 2001


Janitor Prime posted:

Steve is right, and here's a more real-life failure of trying to use TDD to iteratively solve a problem: http://xprogramming.com/xpmag/OkSudoku

Yeah, and then there's the other side -- http://norvig.com/sudoku.html
And the takeaway here: http://vladimirlevin.blogspot.com.au/2007/04/tdd-is-not-algorithm-generator.html

It's why TDD doesn't work for beginners, because they need to have some concept of how they're going to solve the problem first. If you are literally programming by coincidence, TDD won't make it less of a coincidence. It's a tool that needs to be properly applied, just like any other method.

Maluco Marinero fucked around with this message at 00:28 on Jan 1, 2015

_aaron
Jul 24, 2007
The underscore is silent.

Steve French posted:

To be more specific, I'm referring to the idea of intentionally writing and repeatedly revising an incomplete/incorrect implementation of a method to pass successive tests, rather than starting with a best effort of code and tests (regardless of which is written first), and then tweaking the code/tests and adding more tests as necessary to handle unforeseen edge cases.

Gotcha, I'm on board with this. I may have just misunderstood you earlier in the thread (or conflated a whole bunch of posters into one).

_aaron fucked around with this message at 01:05 on Jan 1, 2015

brap
Aug 23, 2004


User controls? You mean those bad things that make life bad?

Axiem
Oct 19, 2005

I want to leave my mind blank, but I'm terrified of what will happen if I do

Steve French posted:

To be more specific, I'm referring to the idea of intentionally writing and repeatedly revising an incomplete/incorrect implementation of a method to pass successive tests, rather than starting with a best effort of code and tests (regardless of which is written first), and then tweaking the code/tests and adding more tests as necessary to handle unforeseen edge cases.

In my (admittedly relatively limited) experience, it generally only takes until the second or third test before you have to start actually building an algorithm instead of handling special cases. As in, the first test is kind of a base case ("when there is nothing, return 0"), and then from there, you build up functionality. Once you have the skeleton of an algorithm together, successive tests do tend to be for cases that haven't yet been accounted for.

As I understand it, the danger of actually putting together a "complete" implementation of an algorithm is that you don't actually have confidence that 1) your tests adequately cover everything the code does, and 2) you can change the implementation without actually changing input/output functionality. While I appreciate where you're coming from (because yes, it does feel really stupid to put together a stupid algorithm just to pass tests when you know what the real algorithm is going to be like), I think it's important to make sure you aren't forgetting something when you write the algorithm.

And, even though I've not heard it stated elsewhere in the TDD world, I think it also gives you confidence that your tests are actually testing what you think they should be testing. I've been in the situation where I've written tests for an algorithm that already exists, and I write the test, and it passes, and I'm not really sure whether or not it passed because the code works, or because I wrote the test incorrectly. (The solution, I know, is to comment out the code and see whether or not the test passes, but in those situations, I still end up being uncertain that I really know what's going on).

Another part of TDD that I think is being missed in this discussion is that it's not just about having tests for code. It's also about using the tests to help guide the design and development of the code, and I mean that at a scale larger than one particular algorithm. What objects do you have and how do they interact with each other? Do they follow the SOLID principles (unless there's a good reason not to)? Are things too tightly coupled? How do things work at a higher level?

While TDD isn't necessary to get good, clean code, it can help. A lot of people just don't seem to get through the initial state of "red, green, refactor" inside of a class, instead of really thinking about their architecture.

I'm not confident that TDD does much good for math implementations; I feel like they're pretty similar to factories in that regard, with few branches/loops. But for business logic—especially when you have clear requirements on how things should work—then I think TDD is at its best stride.

As for UI stuff, I think it's pointless to write tests for views. And I'm not yet convinced that UATs add much value. Write tests for clearly defined input/output in your models; not for passthroughs, wiring, factories, or "how good it looks".

metztli
Mar 19, 2006

fleshweasel posted:

User controls? You mean those bad things that make life bad?

Yes. Though as I said, we've since shifted all new development to be done MVC, and are spending a fair amount of our time converting webforms stuff over to MVC.

In the course of that migration I have come across many things that were horrible but one that is relevant would be a page where the same 10 controls were copy pasted a few dozen times with the only differences being the IDs had numbers appended and that each set after the first was set to be hidden. There was logic for populating and making them visible based on the results of non-parameterized SQL in the code behind. The icing on that cake was a single comment at the bottom of the code behind - "There's got to be a better way to do this."


TheresaJayne
Jul 1, 2011

Maluco Marinero posted:

Yeah, and then there's the other side -- http://norvig.com/sudoku.html
And the takeaway here: http://vladimirlevin.blogspot.com.au/2007/04/tdd-is-not-algorithm-generator.html

It's why TDD doesn't work for beginners, because they need to have some concept of how they're going to solve the problem first. If you are literally programming by coincidence, TDD won't make it less of a coincidence. It's a tool that needs to be properly applied, just like any other method.

I agree with this sentiment; it took me a while to "get" TDD and I don't think I have it completely. The idea is good, and if you work in a workplace that requires a minimum of 90% coverage then it saves a lot of time and effort trying to shoehorn coverage tests into the code afterwards.

But as a 100% must-do thing - I disagree. FFS, why test the setters and getters? We know they work; they have always worked. And if the developer is suitably obtuse, they can have some devastating effects on code.

for instance, here's a first test for a getAmount() getter I once saw someone write:

Java code:

@Mock
Account account;

@Test
public void canTestGetAmount()
{
    // Stubbing the getter and then asserting on the stub: this test
    // exercises the mock, not the Account class.
    when(account.getAmount()).thenReturn(1.0f);

    float result = account.getAmount();

    assertThat(result, is(1.0f));
}
