|
People who write and release bad software should be publicly made to feel bad. I don't see what's so hard to understand about this.
|
# ¿ Jan 25, 2013 17:53 |
|
shrughes posted:Bullshit, it's not constructive. It's just somebody's little utility script. And they put it on github. You don't have to worry about that poo poo in utility scripts. The computer's not going to suddenly crash on you.
|
# ¿ Jan 25, 2013 20:55 |
|
Cocoa Crispies posted:I'm going to hand you a complete copy of all the code you've ever written and a revolver with a single bullet. We'll let you be responsible for any future bad code you proliferate.
|
# ¿ Jan 25, 2013 21:21 |
|
Everyone should be deeply ashamed of their code and make every attempt to conceal it. If you expose it to the open air, people will laugh and ridicule you, and deservedly so. Only a small amount of code that has ever been written is worth looking at, and the rest is highly corrosive and radioactive and should be kept in a lead container inside a concrete sarcophagus rather than on github where people might accidentally look at it.
|
# ¿ Jan 26, 2013 05:41 |
|
Jabor posted:So this may or may not count as a horror:
|
# ¿ Feb 2, 2013 18:29 |
|
Maybe instead of trying to be ultra clever you should just write even n = n `rem` 2 == 0 like a normal person.
|
# ¿ Mar 9, 2013 00:10 |
|
Suspicious Dish posted:I know how malloc and free work, I just didn't expect free to allow NULL as a special case, since to me it's an invalid input (you can't free a null pointer after all) and the C library typically just says "if you pass in an invalid input, it's your fault". But perhaps the utility of having unguarded frees is so convenient that they put it in there.
|
# ¿ Apr 30, 2013 05:07 |
|
The naive (and correct) solution is std::swap(a, b);
|
# ¿ May 7, 2013 01:32 |
|
seiken posted:I stumbled across this thread on another forum that's from a year and a half ago so it's kind of old, but it's too good not to share.
|
# ¿ May 7, 2013 13:22 |
|
When writing some code for doing MRI simulations I had a process for performing phase-encoding shots and counting them for the current image and cumulatively. Naturally, two accumulator variable names arose: cur_shot_count and cum_shot_count.
|
# ¿ May 8, 2013 12:16 |
|
SplitDestiny posted:Sometimes I like to search github...
|
# ¿ May 11, 2013 09:25 |
|
Suspicious Dish posted:You might think that's a joke, but lots of people advocate the enterprise approach.
|
# ¿ May 20, 2013 16:48 |
|
C++ code:
|
# ¿ May 20, 2013 17:21 |
|
That Turkey Story posted:I wanted to go and do it anyway:
|
# ¿ May 20, 2013 18:43 |
|
What have I done
C++ code:
Volte fucked around with this message at 07:46 on May 21, 2013 |
# ¿ May 21, 2013 07:38 |
|
Dren posted:Volte did you make all that up just for this or is it a transformation you adapted?
|
# ¿ May 21, 2013 14:57 |
|
Jabor posted:A single loop with no conditionals or divmods is kind of easy though:
|
# ¿ May 21, 2013 15:24 |
|
Here is my solution with no conditionals or divmod. printf("1 2 fizz 4 buzz 6 ...")
|
# ¿ May 21, 2013 15:24 |
|
I'm all for a nice belittling tirade. It's the actual writing that makes me want to punch that guy in the dick.
|
# ¿ May 31, 2013 20:03 |
|
Ithaqua posted:I find it pointless given that there are actual IDEs available because it's 2013.
|
# ¿ Jun 4, 2013 02:56 |
|
It's okay to use the mouse and arrow keys in Vim; it's not shameful, despite what other people might try to tell you. You also get many other useful things like text objects, the f/t keys for jumping to a character on the line, 'cc' for replacing the current line, 'ciw' for erasing and changing the current word, 'ci(' for erasing the contents of the current set of parens, and other such things. Vim's good at navigation, but editing and transformation are its real strong suit. It takes a bit of time to learn, but if you do a lot of coding you will become proficient before very long.
|
# ¿ Jun 4, 2013 19:00 |
|
I had a TA dock me because the comments in my printed-out code (the only way that we submitted assignments in that class) were not green.
|
# ¿ Jul 26, 2013 03:50 |
|
Here is a picture of that assignment I found. It was so ridiculous I just had to take a photo (sorry, it was from before I owned a phone with a decent camera). The assignment was to take an integer and print the word representation of it, which I did by recursively breaking the number up into three-digit groups. The correct way was a massive case statement. When I confronted him about it, he babbled something about not understanding my code and not being able to read the comments because they weren't green, so he couldn't tell where the comments stopped and the code began. I failed this assignment; other people got 100% even if their code didn't compile or make sense.
|
# ¿ Jul 26, 2013 04:17 |
|
pokeyman posted:Did your code work? I'm having a hard time reading the blurry code. I do like these marker's comments though:
|
# ¿ Jul 26, 2013 05:36 |
|
shrughes posted:Volte, with that break/continue usage and exit(1) I have to conclude that you have no sense of control flow.
Otto Skorzeny posted:The part Volte left out is that after he showed his professor, the whole class had that assignment regraded and the prof got him an internship
|
# ¿ Jul 27, 2013 03:09 |
|
Rothon posted:It would be more appropriate to assert(3) or abort(3) there.
|
# ¿ Jul 27, 2013 18:25 |
|
That's a spicy meatball
|
# ¿ Jul 27, 2013 19:31 |
|
It wasn't that guy, it was someone from #cobol
|
# ¿ Jul 27, 2013 19:32 |
|
Self-documenting code can only ever cover the what; it can never cover the why. That said, I prefer a well-written external document explaining the rationale behind the code to having to sift through tons of comments strewn everywhere (particularly massive headers that are bigger than the function itself). Plus, when the documentation is external and tied to a particular version of the code, there's less of a mindfuck when you realize that the comments are out of date, and you don't spend 20 minutes trying to reconcile what the documentation says with what the code actually does.
|
# ¿ Aug 14, 2013 03:58 |
|
Everything that has ever worked will continue to work, forever.
|
# ¿ Aug 26, 2013 20:42 |
|
Don't be too hard on the guy, he's just trying to make some $ so he can afford three meals per day.
|
# ¿ May 22, 2015 14:49 |
|
Biowarfare posted:view source on that site, holy gently caress
|
# ¿ May 22, 2015 17:50 |
|
sarehu posted:null is seriously the most overrated "problem" ever. If you just pretend they don't exist, that a pointer in null state is just a trap representation that you can never read or compare to anything, i.e. that the nullness of a pointer is something you know statically, then you won't get any big costly problems in your software development (and certainly some non-nullable reference type being the default would be worse).
|
# ¿ Jul 6, 2015 12:09 |
|
1337JiveTurkey posted:Optional types feel half-assed when mixed with parametric polymorphism because while Foo[T] and Foo[Option[T]] may look closely related, they're nothing alike. Any situation where a method could return an Option[T], it should be equally valid to return a T rather than a Some[T]. So if T is covariant with Foo, then Foo[T] should be a subtype of Foo[Option[T]]. But if you want that in Scala for instance, then it needs to be Foo[Some[T]] which seems a tad ridiculous in terms of overhead when I want to use a Foo that always returns a T in a situation that calls for a Foo that may return a T without some sort of shim.
|
# ¿ Jul 7, 2015 20:04 |
|
Sinestro posted:This is from my phone while I wait to get pizza, but isn't that just an implicit conversion via join :: (Monad m) => m (m a) -> m a? I mean, I guess you could make an Option[T] that didn't follow monad laws, but I'm pretty sure you'd have to actively try.
|
# ¿ Jul 8, 2015 10:35 |
|
SupSuper posted:aaaaaaaaaaaaaaaaaaaaaaaaaaa
|
# ¿ Jul 12, 2015 00:00 |
|
Even if you're not using test-driven development, I still feel like writing the test that you would have written if you hadn't yet written the implementation (or even had any concept of what it might look like internally) is the best option for robustness. If you actually want to test internal implementation details (like making sure one private method calls another), then I would barely even consider that a unit test. If there is an internal implementation detail that needs to be asserted as part of a unit test, then it should be part of the interface contract (either literally, or at least documented as such).
It might make sense to test that the internal quicksort partition function is called log(n) times, but that's because a quicksort function that does not satisfy that property is not quicksort, even if it sorts the array. It does not make sense to test that your mocked model received exactly 1 call to .Validate() and 10 calls to .OnPropertyChanged() over the course of one unit test. Instead, I would test that saving an invalid model results in a validation error, and that observers receive property change notifications.
I've heard arguments against using more than one "concrete" class per unit test, or that mixing multiple testable classes into one unit test is not actually unit testing but more like integration testing, but I feel like that applies only if the dependencies are circular (i.e., don't test two services that depend on each other within the same unit test -- break the loop using a mock/stub). If you have a Person model class along with a full set of PersonTests that verify it behaves correctly, then I have no problem using Person as a concrete class inside my ModelControllerTests instead of mocking a model class.
As long as unit tests use only components that are themselves independently unit-tested (barring, obviously, the component that is meant to be tested), it's still a unit test and only tests one thing, as long as you don't stupidly write test code for your already-tested components. Yes, if you break the Person model then it may break your ModelControllerTests, but I question the value of being in the situation where some essential code is broken but some other code that depends on it directly is not. The PersonTests should be failing as well, so you will never be masking the source of a test failure.
I say there is greater value in the test environment mirroring the real environment as closely as possible, meaning as few ad-hoc testing implementations of things as possible. Obviously that depends on the project size and development strategy (and exactly what the impact of a failing test is), but I don't mind 25% of the tests turning red if one of the core services breaks. It seems like that is kind of a realistic result actually, and you gotta fix that poo poo pronto or else the same sweeping faulty interactions will (or could) happen in production. The only time I can think of that I would substitute fake code for real code (i.e., mocks, stubs) is when the real code depends on external services or uncontrollable non-determinism. Even if you have to test a service that depends on random numbers, being able to supply a PRNG and seed to the service and then using a fixed value in the test is better than making a mock service that just returns a predefined list of numbers where randomness would otherwise go.
Another option would be to design services that have modular implementations: your service for downloading from the web could have a DownloaderProvider that, by default, uses some HttpDownloaderProvider that fetches things from the web, but then you could supply a (complete and, most importantly, independently unit-testable) implementation of DownloaderProvider called, e.g., LocalDownloaderProvider that fetches things from a local store or generates them. Then the LocalDownloaderProvider is not really a mock but a concrete implementation of an interface provided by your application, rather than just transient/axiomatic test code that exists within your unit testing environment. And then the thing I said earlier about composing already-tested components within a unit test applies: you can use the LocalDownloaderProvider to test the interface of your downloader service without having to create any "fake" test-only code or rely on a mocking library to do the right thing.
|
# ¿ Aug 3, 2015 14:25 |
|
Bognar posted:Why would you willingly make debugging your tests harder? Digging through 25% of your tests to find which one actually broke seems ridiculous compared to knowing exactly which single test broke.
edit: The other thing is that practically, a fundamental service breaking and causing everything else to break unexpectedly is probably not going to happen enough to make that possibility a fundamental pillar of your design. If you need to alter your fundamental service, then you need to run the tests for that service and make sure they don't break. If you do change a fundamental service and suddenly everything is broken, where's the mystery? Which part of that causes your tests to become hard to debug? Optimizing for the common case is popular in algorithm design, but it seems like optimizing for the worst case is the general modus operandi for software design. Why not assume the common case will hold (i.e., the uber-simple Person model does not magically break overnight) and make it robust enough that the worst case is not a complete catastrophe (i.e., you might have to spend 15 minutes looking through 100 test cases to find out which one is truly broken, although looking at the reasons for the secondary failures should also be enough to clue you in).
Volte fucked around with this message at 14:53 on Aug 3, 2015 |
# ¿ Aug 3, 2015 14:45 |
|
Jabor posted:Except that takes forever since stepping through your "unit test" involves traversing thousands of lines of fairly unrelated components you decided it wasn't worth the effort to mock out.
And using implementations explicitly may not be the best option -- something like dependency injection could be used at the unit test level. If you have a test that only holds if there is some valid Model object available and allow that object to be injected, then you have a way to construct an acyclic dependency graph. If ModelControllerTests requires a Model and we inject Person, it would be trivial to detect Person-related failures by running PersonTests before even running the ModelControllerTests and bailing if any are found, thus avoiding spurious test failures. If you think about unit tests as proofs of propositions (i.e., the proposition "this class has been tested", NOT "this class is verified correct") then you can get very general tests of the form Tested(Foo), P1, P2 ⊢ Tested(Bar), or even ∀F⊂Foo(Tested(F), P1, P2 ⊢ Tested(Bar(F))). (In this case P1 and P2 might be something like "adding one should increase total by one", or whatever your normal unit tests would be, the proof of which is given by running the test and returning a passing result.) A mock/stub Foo object is a valid proof of Tested(Foo) in this case, but if we already have perfectly reasonable real instances of Tested(Foo) lying around, then why introduce additional (in some cases deeply magical) instances of a potentially complex class into the test hierarchy?
Volte fucked around with this message at 15:48 on Aug 3, 2015 |
# ¿ Aug 3, 2015 15:42 |
|
Jabor posted:So what happens if the "failing assertion" is due to something like, for example, a callback you expected to get never happened? Are you going to manually dig into wherever the callback was supposed to be called and then walk backwards until you find the part that didn't work? What if you don't actually know anything about that particular component?
I'm not saying mock objects aren't valuable in their place (whatever their place may be), but I reject the notion that treating components as islands for testing purposes is the only good way to do it. Dependency injection that knows how to associate a component with its test cases (or unit tests that are generic in the underlying implementation) seems like a fairly straightforward way to avoid strong coupling in the unit tests while still letting the components interact with each other more-or-less as they would in production. The more divergence between the testing environment and the production environment, the less effective the tests are going to be at rooting out production-time bugs. Hell, if you wanted to get fancy you could have it attempt a sequence of possible services to inject, choosing one whose tests succeed. If all you care about is "given a bunch of services that have been shown to be tested successfully..." then it doesn't actually matter what concrete implementation gets put in, as long as it behaves itself. And if it does matter, then that's an integration test.
|
# ¿ Aug 3, 2015 18:41 |