|
qntm posted:
Macros and SQL: two great tastes that go great together. The 14 base arguments are a nice touch as well.
|
# ? Feb 9, 2016 17:14 |
|
MisterZimbu posted:Isn't that better served by naming the test appropriately and/or leaving a comment saying "make sure we don't throw an exception"? Neither of those things is necessarily representative of what the test actually tests. You could have a test named MakeSureFooIsNotNull with a comment that says "Makes sure foo is 4" and Assert.AreEqual(null, Foo) in the test. You look at the assert and you know exactly what's being tested. If your test has Assert.DoesNotThrow(() => new Uat()); or something, you know definitively that the writer of the test was worried about exceptions in the constructor. Yeah, the other things help, and bad comments and names should be fixed. But asserts are what define the test.
|
# ? Feb 9, 2016 17:59 |
|
More of a . https://github.com/jayphelps/git-blame-someone-else
|
# ? Feb 9, 2016 18:12 |
|
MisterZimbu posted:Isn't that better served by naming the test appropriately and/or leaving a comment saying "make sure we don't throw an exception"? Some test frameworks allow you to assert an exception was thrown.
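[For instance, JUnit's built-in form — a sketch assuming JUnit 4.13+ and the hypothetical Uat class from the earlier post:]
Java code:
import org.junit.Test;
import static org.junit.Assert.assertThrows;

public class UatTest {
    @Test
    public void constructorRejectsBadInput() {
        // Passes only if the lambda throws IllegalArgumentException.
        assertThrows(IllegalArgumentException.class, () -> new Uat(null));
    }
}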
|
# ? Feb 9, 2016 18:20 |
|
We can all agree the worst offense is when someone mixes up the argument order for JUnit assertions and you spend a sad amount of time debugging because of it.
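[JUnit's assertEquals takes (expected, actual); swap them and the failure message lies to you. A sketch, with a hypothetical compute():]
Java code:
int result = compute();   // suppose this wrongly returns 4

// Swapped: fails with "expected:<4> but was:<42>", pointing you
// at exactly the wrong value to go hunting for.
assertEquals(result, 42);

// Correct order: fails with "expected:<42> but was:<4>".
assertEquals(42, result);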
|
# ? Feb 9, 2016 18:22 |
|
One of the examples is changing a commit in the official repo to be owned by Linus Torvalds. That's fantastic.
|
# ? Feb 9, 2016 18:24 |
FamDav posted:Some test frameworks allow you to assert an exception was thrown. I would imagine any language with unit tests and try/catch blocks allows you to assert an exception was thrown:
code:
try {
    thingThatWillThrowAnException();
    Assert.fail("expected an exception");
} catch (Exception expected) {
}
|
|
# ? Feb 9, 2016 18:35 |
|
ChickenWing posted:I would imagine any language with unit tests and try/catch blocks allows you to assert an exception was thrown The only reason you should have to write all that is if you need to look at the exception:
code:
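[The snippet didn't make it into the archive; judging from the reply below, it was presumably the declarative style, along the lines of JUnit 4's annotation:]
Java code:
// The try/catch above collapsed into one line of metadata:
// the test passes only if the constructor throws IllegalStateException.
@Test(expected = IllegalStateException.class)
public void constructorThrows() {
    new Uat();
}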
|
# ? Feb 9, 2016 19:42 |
|
qntm posted:
no doubt all these variables' contents are carefully escaped somewhere else
|
# ? Feb 9, 2016 19:52 |
|
FamDav posted:only reason you should have to write all that is if you need to look at the exception That's not the only reason. With the decorator approach you can't tell the difference between an exception that happens in your test code and an exception that happens in the code under test. That's irrelevant if the exception type you're looking for is sufficiently specific, but in many languages the same exception types are reused all over the place, so it's not that far-fetched. Anyone who thinks this will only happen if the test setup code is too complicated and does stuff it shouldn't do probably hasn't tried to write unit tests for code that wasn't designed with unit testing in mind. I've actually had multiple false negatives because of this with the [ExpectedException] attribute in MSTest in .NET.

Also, you obviously aren't going to write all that stuff every time. You just define a helper function that takes a call to the function as input (if you aren't using a test framework that already has one), so the call is something like Assert.Throws<ExceptionType>(() => MethodThatShouldThrow()).
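[A minimal Java analogue of the helper described — the post itself is about C#'s Assert.Throws, so the names here are illustrative:]
Java code:
static <T extends Throwable> T assertThrows(Class<T> expected, Runnable call) {
    try {
        call.run();
    } catch (Throwable t) {
        if (expected.isInstance(t)) {
            return expected.cast(t);   // thrown by the code under test, as required
        }
        throw new AssertionError("expected " + expected.getName() + " but got " + t, t);
    }
    throw new AssertionError("expected " + expected.getName() + " but nothing was thrown");
}

// Usage: the setup code runs outside the lambda, so an exception thrown
// there fails the test instead of silently counting as a pass.
// assertThrows(IllegalStateException.class, () -> methodThatShouldThrow());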
|
# ? Feb 9, 2016 22:21 |
|
qntm posted:
Motherfuc
|
# ? Feb 9, 2016 23:04 |
|
This is, of course, from the CGI script which serves the web UI front-end of our test results database. Elsewhere there are pieces of code which laboriously assemble snippets of HTML using C strings. There are probably worse horrors in this codebase. I think it might be the worst code I've ever seen while working at this company, but I don't understand C very well, so it's hard to say. It's the work of a single person who was allowed to maintain this project single-handedly with minimal oversight for far too long.
|
# ? Feb 9, 2016 23:31 |
|
ninjeff posted:Really? That just makes me think that the writer thinks constructors can return null. Some can: new (std::nothrow) X();
|
# ? Feb 9, 2016 23:41 |
|
Series DD Funding posted:Some can C++ is cheating
|
# ? Feb 9, 2016 23:43 |
|
Wouldn't it, if you REALLY wanted to check constructor failure, be more direct to simply:
code:
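[The code didn't survive, but from the two replies it was evidently a bare try/catch, roughly:]
Java code:
try {
    new Uat();
} catch (Exception e) {
    // Fails the test on any constructor exception, but (as the next
    // post points out) throws away the details of what went wrong.
    Assert.fail();
}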
|
# ? Feb 10, 2016 00:15 |
|
But now you are swallowing the exception, which you could use to investigate why it failed. If there's any place where you want to see all the details your program spits out when there's an exception, the tests are among the best. Space Kablooey fucked around with this message at 00:25 on Feb 10, 2016 |
# ? Feb 10, 2016 00:18 |
|
Double post
|
# ? Feb 10, 2016 00:24 |
|
I suppose you could do something like:
code:
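[This snippet is also missing; presumably it kept the exception's details in the failure report, something like:]
Java code:
try {
    new Uat();
} catch (Exception e) {
    // Fail, but carry the cause along so the report shows the stack trace.
    throw new AssertionError("constructor threw", e);
}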
Linear Zoetrope fucked around with this message at 01:26 on Feb 10, 2016 |
# ? Feb 10, 2016 01:23 |
|
Unless you have intermittent failures that only happen on your CI server, unit tests are pretty much the single least important place to log why something failed because you can just rerun the test with a debugger attached and find out.
|
# ? Feb 10, 2016 02:33 |
|
Jsor posted:But it raises the question of why do this when you can just use a comment? Again, because you can't trust comments. Asserts are definitive descriptions of expected behavior, while a given comment may or may not have anything to do with what's going on in the code.
|
# ? Feb 10, 2016 02:39 |
|
Che Delilas posted:Again, because you can't trust comments. Asserts are definitive descriptions of expected behavior, while a given comment may or may not have anything to do with what's going on in the code. You need both. I encountered a unit test that asserted that the return value of a specific function was just some random integer, and there were no comments. It was completely baffling why they expected that particular output for the input they passed in; it turned out that the integer was no longer the correct expected output, and they had just forgotten to update that particular test when the specification changed. A named constant would also have been better, but at least a comment like "make sure output is the interest owed" would have let me know that this unit test incorrectly wanted the interest only and not the interest+principal, which was the actual correct output.
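[A sketch of the failure mode described, with made-up numbers and names:]
Java code:
// Baffling: where does 437 come from, and is it still right?
assertEquals(437, account.amountOwed());

// Better: the intent survives a specification change.
// NOTE: this is the interest only, not interest + principal.
int expectedInterestOwed = 437;
assertEquals(expectedInterestOwed, account.amountOwed());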
|
# ? Feb 10, 2016 03:35 |
|
If you reach the point that you cannot trust comments to be even vaguely informative, then your code has Problems.
|
# ? Feb 10, 2016 04:29 |
|
I wrote some tests today that boil down to "try plugging the fuzzer in and simulating through the resulting output to ensure that these robots don't break expensive things irl" where the passing case is just that nothing freaked out and exploded assertions everywhere (given that the stuff that, you know, tests our ability to test for things going wrong passes). Reading this thread made me turn my brain on, consider that those might not quite fall under the category of unit tests, and think about splitting them off and putting them somewhere else. Thanks for making me less poo poo, thread!
|
# ? Feb 10, 2016 05:18 |
|
This seems as good a time as any to share this: https://github.com/rspec/rspec-expectations/issues/655
|
# ? Feb 10, 2016 05:26 |
|
Cuntpunch posted:Wouldn't it, if you REALLY wanted to check constructor failure, be more direct to simply: My hypothetical was testing that the constructor didn't throw an exception.
|
# ? Feb 10, 2016 15:11 |
|
Oh, huh. I've had a crude version of that bound to git shitcommit for years, now. Occasionally comes in useful.
|
# ? Feb 10, 2016 15:20 |
|
I'm trying to say this:Python code:
code:
Space Kablooey fucked around with this message at 17:27 on Feb 10, 2016 |
# ? Feb 10, 2016 15:38 |
|
Plorkyeran posted:Unless you have intermittent failures that only happen on your CI server, unit tests are pretty much the single least important place to log why something failed because you can just rerun the test with a debugger attached and find out. If you don't have intermittent failures, you probably don't have enough tests.
|
# ? Feb 10, 2016 16:17 |
|
Subjunctive posted:If you don't have intermittent failures, you probably don't have enough tests. That is kinda true, but they generally should be in your integration tests and not in your unit tests.
|
# ? Feb 10, 2016 17:00 |
|
Plorkyeran posted:That is kinda true, but they generally should be in your integration tests and not in your unit tests. Or you've discovered a race condition in your code; have fun finding it.
|
# ? Feb 10, 2016 17:26 |
|
A test which is capable of having a race condition pretty much by definition isn't a unit test, unless the unit under test is your thread synchronization functionality or something. I'm not just nitpicking here; how you go about testing that a function throws an exception when given a specific invalid input and how you test a thing that spins up a bunch of threads that do some work and communicate between each other should be very different, and the amount and type of information that needs to be reported on failure is similarly different.

In the latter case I'd still generally argue that it's often better to have the actual functionality being tested log enough information to debug failures without much in the way of reporting from the test itself, as that also helps you debug production problems, but that's clearly much more situational (e.g. it assumes that you have production logs).
|
# ? Feb 10, 2016 17:40 |
|
You can have small multi-threaded functions be units, like, say, some parallel sort routine. Or big ones: I would consider a multi-threaded storage engine to be a unit of a larger application, and your unit test could mock the locking system or disk in order to re-run operations in a whole bunch of different orders.
|
# ? Feb 10, 2016 22:24 |
|
We had a great unit test a couple years back that had an "acceptable variance" for passing. It was basically a randomized weighted sort that you could pretend was deterministic with a 5-10% variance on the output. I enjoyed committing proper tests for that masterpiece.
|
# ? Feb 11, 2016 07:51 |
|
I have some stuff like that in my most recent product -- the code in question is some realtime stuff that can have configurable timeouts or be cancelled from another thread. Quite a few unit tests take the form of "set up interesting condition, and look for it to exit in 5.5 seconds plus or minus 0.5 seconds" or "start a big workload, then hit the cancel button, and expect it to return with the partially completed half-accurate answer within 50ms." It's actually pretty nice. Sadly, it gets disabled for daily builds because those are done on VMs on a massively overloaded host and you're lucky if you get a context switch once a second. The honest truth is that there's a third class somewhere between unit tests and e2e integration tests that isn't handled well by most modern testing frameworks.
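[The timing-window pattern described, sketched with hypothetical names:]
Java code:
long start = System.nanoTime();
worker.runWithTimeout();   // configured to give up after 5.5 s
long elapsedMs = (System.nanoTime() - start) / 1_000_000;

// Expect exit at 5.5 s ± 0.5 s; anything outside the window fails.
assertTrue("exited after " + elapsedMs + " ms",
        elapsedMs >= 5_000 && elapsedMs <= 6_000);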
|
# ? Feb 11, 2016 08:07 |
|
Subjunctive posted:If you don't have intermittent failures, you probably don't have enough tests. ok we don't have intermittent fails (now) and we have 15000+ unit tests, should we add more?
|
# ? Feb 11, 2016 09:17 |
|
TheresaJayne posted:ok we don't have intermittent fails (now) and we have 15000+ unit tests, should we add more? 50000 tests
|
# ? Feb 11, 2016 17:28 |
|
necrotic posted:We had a great unit test a couple years back that had an "acceptable variance" for passing. It was basically a randomized weighted sort that you could pretend was deterministic with a 5-10% variance on the output. I enjoyed committing proper tests for that masterpiece. I've come across tests that tried to calibrate on system load by running a known bit of work and then scaling the timeouts accordingly. I enjoyed setting that on fire.

TheresaJayne posted:ok we don't have intermittent fails (now) and we have 15000+ unit tests, should we add more? It's a good start! Mostly I guess it's rate-of-test-addition that determines flakiness, maybe? Six digits is where the real work starts.

E: actually, as I think more about it, flakiness has tended to emerge when tests start to be run in more different configurations (OS, number of cores, device type, etc.) Subjunctive fucked around with this message at 18:11 on Feb 11, 2016 |
# ? Feb 11, 2016 18:05 |
|
We have about 9000 JS unit tests and the only intermittent failures from those are when PhantomJS crashes randomly. Our e2e tests are a different story.
|
# ? Feb 11, 2016 18:12 |
|
we do not have unit tests. instead we have "test plan runners", which are dedicated QA staff that just spend all their time running through workflow scripts in the integration testing environment and reporting things they find that are broken. it works about as well as you would think.
|
# ? Feb 11, 2016 19:27 |
|
The QA staff at my old job didn't trust our unit tests (and retested the features themselves), so we just stopped writing them
|
# ? Feb 11, 2016 19:56 |