|
ChickenWing posted:Funny, this is basically what my tech lead says to me every time I use reflection He is right. There are cases where reflection is the only way (or the most elegant way) of achieving a goal, but take care: the price you pay for it is quite high.
|
# ? Aug 19, 2016 14:37 |
|
|
|
rt4 posted:In most software development, it seems to me that reflection would only be good for tapdancing around bad object design. I'd be really interested in reading anything that proves me wrong! Not a Java dev but I'm inclined to agree. I find a lot of the utility stuff is good for either debugging or building framework functionality, but is a bad idea to use in actual program code.
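To put a concrete shape on the debugging/framework case: here's a minimal sketch (class and field names invented for illustration) of the kind of field-dumping utility that's handy inside a logger or framework but a bad idea in ordinary program code:

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.Comparator;

public class Inspector {
    static class Point { int x = 3; int y = 4; }

    // Dump every declared field of an arbitrary object. Fields are
    // sorted by name because getDeclaredFields() guarantees no order.
    static String dump(Object o) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder(o.getClass().getSimpleName());
        Field[] fields = o.getClass().getDeclaredFields();
        Arrays.sort(fields, Comparator.comparing(Field::getName));
        for (Field f : fields) {
            f.setAccessible(true);
            sb.append(' ').append(f.getName()).append('=').append(f.get(o));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(dump(new Point()));
    }
}
```

Note that nothing here knows or cares what `Point` actually is, which is exactly why it's great for diagnostics and terrible as program logic: the compiler can't check any of it.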
|
# ? Aug 19, 2016 14:40 |
Yeah, we designed ourselves into a corner a little bit in this case, but the end result isn't half bad and it's much more flexible than a number of other solutions we looked into.
|
|
# ? Aug 19, 2016 15:08 |
|
I end up having to use reflection when doing something with plugins. It looks like the right thing to do then. Is there a better general approach?
|
# ? Aug 19, 2016 15:13 |
|
Rocko Bonaparte posted:I end up having to use reflection when doing something with plugins. It looks like the right thing to do then. Is there a better general approach? In the .NET world, there's MEF. You define an interface for your plugin and MEF discovers plugin assemblies that match up to your interface. I'd be stunned if there wasn't something similar for Java. [edit] It looks like Java has OSGI
|
# ? Aug 19, 2016 15:44 |
|
OSGi is massive and uses reflection itself, although it is very capable. Use it if you need its features. At the end of the day, for plugins you are pretty much forced to use reflection, either via a library or custom code. Most likely .NET uses reflection too. Since Java 1.6 you can use ServiceLoader for simple plugin support, and maybe in Java 9 the new module system will offer easier ways. But dynamically loading a class at runtime, instantiating an object, and calling methods on it most definitely involves reflection of some kind.
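As a rough sketch of the ServiceLoader route (the Plugin interface here is hypothetical; real providers would be implementation classes listed in a META-INF/services file on the classpath):

```java
import java.util.ServiceLoader;

public class PluginHost {
    // Hypothetical plugin contract. A provider jar would implement this
    // and name its implementation class in a file called
    // META-INF/services/PluginHost$Plugin.
    public interface Plugin {
        String name();
    }

    public static void main(String[] args) {
        int found = 0;
        // ServiceLoader does the Class.forName/newInstance dance for you.
        for (Plugin p : ServiceLoader.load(Plugin.class)) {
            System.out.println("loaded: " + p.name());
            found++;
        }
        // With no provider files on the classpath, discovery finds nothing.
        System.out.println("plugins found: " + found);
    }
}
```

Run standalone this prints `plugins found: 0`; the point is that the reflective loading is hidden behind a standard library API instead of hand-rolled.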
|
# ? Aug 19, 2016 16:35 |
|
Volguus posted:OSGI is massive and it itself is using reflection too, although is very capable. Use it if you need its features. I'm sure it does, but having something that nicely abstracts the process is good. I'd prefer not to roll my own half-assed plugin loading library.
|
# ? Aug 19, 2016 17:37 |
|
It's one thing to use reflection for that sort of thing, to dynamically load an instance of some class at runtime meant to implement some interface, and then use it as an instance of said interface throughout your code. It's another entirely to use reflection in your code to inspect an object of some unknown type to find fields/methods/etc and use them.
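A sketch of that "good" case, with made-up class names: the class name is resolved reflectively exactly once at the boundary, and everything downstream only ever sees the interface:

```java
public class ReflectiveLoad {
    public interface Greeter {
        String greet();
    }

    public static class EnglishGreeter implements Greeter {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) throws Exception {
        // In real code this name would come from a config file or plugin
        // manifest; it's hard-coded here so the sketch is self-contained.
        String className = "ReflectiveLoad$EnglishGreeter";

        // Reflection happens once, at the loading boundary...
        Greeter g = (Greeter) Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();

        // ...and from here on it's an ordinary interface call.
        System.out.println(g.greet());
    }
}
```

The contrast with the "bad" case is that nothing after the cast pokes at fields or methods by name; the type system takes over again immediately.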
|
# ? Aug 19, 2016 18:35 |
|
Yeah okay. Plugins are fine, but cramming a square peg into a round hole in your application is a whole different thing. I've done some dynamic loading with Java a long time ago IIRC, but most of my stuff was .NET. I'm not sure if I ever actually even looked at MEF for that. I want to think I did and got killed by the knights who say, "NIH."
|
# ? Aug 19, 2016 19:16 |
|
rt4 posted:In most software development, it seems to me that reflection would only be good for tapdancing around bad object design. I'd be really interested in reading anything that proves me wrong! I saw a talk that said "clever is bad, it's better to be obvious" and I realized how often I was trying to be clever and making a mess. Reflection is oftentimes too clever. Sometimes it's just so drat sexy though. It's terribly hard to debug and find/replace, though, and that's where some of the problems come in.
|
# ? Aug 19, 2016 19:32 |
For my capstone project in university, my prof had me go through the entire software development process (requirements, design, development, testing) in an incredibly basic elevator simulation using a process called Jackson System Development. One of the key ideas that the guy who made up the process expounded was that if you're being clever, you are almost always doing something wrong or compensating for bad design.
|
|
# ? Aug 19, 2016 19:40 |
|
Here's the talk https://www.youtube.com/watch?v=LdWMcs9EEOE&t=10583s
|
# ? Aug 19, 2016 19:42 |
|
ChickenWing posted:One of the key ideas that the guy who made up the process expounded was that if you're being clever, you are almost always doing something wrong or compensating for bad design. Users followed this pattern that required a consultant that could program to some degree onsite. It was dreadfully boring, repetitive, and way too verbose for how simple it was.
But yeah, writing a small compiler in Java and using reflection to determine types dynamically was not fun at all. I'm just glad it made a lot of people really happy they wouldn't have to touch a compiler again (few of those people should be allowed near a keyboard let alone javac).
|
# ? Aug 19, 2016 23:23 |
|
KoRMaK posted:Here's the talk Wow, just uploading the whole day in that room as a single 8.5 hour video. That's something.
|
# ? Aug 20, 2016 15:30 |
|
Nvm
WINNINGHARD fucked around with this message at 23:06 on Aug 20, 2016 |
# ? Aug 20, 2016 22:20 |
|
Does anyone else think these "no estimates" events/fads/movements are targeted towards the wrong people? All the developers, QAs and analysts get invites and the talks seem tailored to them trying to push that idea upwards. But the business, projects, stakeholders etc. don't give a flying gently caress. I've seen developers push #NoEstimates and get told to gently caress off and give a proper estimate in the nicest possible terms. But I work in a weird place whereby the business is supposed to treat us as a black box of requirements in -> software out, so in theory that poo poo should work because how we implement stuff is not their concern. Yet expectations and reality rarely get on.
|
# ? Aug 21, 2016 22:19 |
|
Cancelbot posted:Does anyone else think these "no estimates" events/fads/movements are targeted towards the wrong people? All the developers, QAs and analysts get invites and the talks seem tailored to them trying to push that idea upwards. But the business, projects, stakeholders etc. don't give a flying gently caress. I've seen developers push #NoEstimates and got told to gently caress off and give a proper estimate in the nicest possible terms. Agile would never have gotten anywhere if it was called #NoGanttCharts. Vulture Culture fucked around with this message at 22:35 on Aug 21, 2016 |
# ? Aug 21, 2016 22:32 |
|
Vulture Culture posted:The name pretty much guarantees it's preaching to the choir. The name comes across as a pissy "we're not gonna give you estimates, mom!" instead of "there are sometimes better ways to spend that time delivering features to users instead of using billable hours to tell you how many billable hours it will cost." Why would someone call it #NoEstimates? Just call it Rapid Development and sell $10MM in books extolling the virtues of it that no one will read or care about but Rapid sounds fast, so maybe it will bring things to market quickly and boy howdy does telling management you can bring things to market quickly go over well. Wait is Rapid Development not a thing? It sounds like a name someone else came up with and sold before I would possibly think of it.
|
# ? Aug 21, 2016 22:42 |
|
leper khan posted:Why would someone call it #NoEstimates? Just call it Rapid Development and sell $10MM in books extolling the virtues of it that no one will read or care about but Rapid sounds fast, so maybe it will bring things to market quickly and boy howdy does telling management you can bring things to market quickly go over well.
|
# ? Aug 21, 2016 22:43 |
|
Vulture Culture posted:The name pretty much guarantees it's preaching to the choir. The name comes across as a pissy "we're not gonna give you estimates, mom!" instead of "there are sometimes better ways to spend that time delivering features to users instead of using billable hours to tell you how many billable hours it will cost." Yeah. My business partner and I (so 2 person design & dev shop) would not be able to get a lick of business if we told clients 'gently caress you no estimates we don't know how much you're asking for will cost'. By the same token, we don't do quotes, we do estimates with fixed budgets, so we always push for flexible scope under that budget so we can keep quality relatively stable and leave the door open for dropping features. We also charge expressly for a half-day/day workshop separately from the main development run so we get enough input as to what the features and budget should be. It's definitely not hardline agile, but hardline agile would be incompatible with most of our customer base and just mess things up for us with no gain really.
|
# ? Aug 21, 2016 22:58 |
|
If you're lucky they just put an analyst or coordinator on the team and have them track time against stories, stories to milestones/releases/whatever, and report on the movements of those. The business people get to keep looking at metrics and the developers get to keep developing. It's not like anyone's measuring lines of code per hour anymore (I really hope.) Regardless of your profession, and even if you're right, acting as though your work can't be measured or estimated comes across as arrogant at best.
|
# ? Aug 22, 2016 07:26 |
|
Cirofren posted:It's not like anyone's measuring lines of code per hour anymore (I really hope.) At work I had to come up with a set of goals that I'd try to work at. One of the suggestions was "lines of code per day." I just groaned when I read that because it's just so profoundly stupid. That and I'm negative on that metric because some of the first tasks I got at work were along the lines of "there is a poo poo load of bad, old code for bad, old features we don't use anymore. Spend a few days being a software janitor and yank that crap out."
|
# ? Aug 22, 2016 08:39 |
|
ToxicSlurpee posted:At work I had to come up with a set of goals that I'd try to work at. Just fill in the blank with -1; if you're above that you can always apologize later.
|
# ? Aug 22, 2016 11:51 |
|
ToxicSlurpee posted:At work I had to come up with a set of goals that I'd try to work at. It should be the absolute value of lines of code. How else can you add functionality by removing code?
|
# ? Aug 22, 2016 16:25 |
|
ToxicSlurpee posted:At work I had to come up with a set of goals that I'd try to work at.
|
# ? Aug 22, 2016 17:05 |
|
Vulture Culture posted:The refactors that facilitate the removal of dead code are usually the most important refactors. It's often harder on the repo than it is for the developer.
|
# ? Aug 22, 2016 17:22 |
|
Never forget that the code you bust your rear end writing today is the code that will be the legacy code of tomorrow. Everyone that's written Perl at odd hours of the night is intimately familiar with this reality.
|
# ? Aug 22, 2016 20:16 |
|
I like the way Michael Feathers puts it: anything that doesn't have unit tests is legacy code. Tests serve both as an explanation of usage and as confirmation that changes haven't broken anything.
|
# ? Aug 22, 2016 20:18 |
|
rt4 posted:I like the way Michael Feathers puts it: anything that doesn't have unit tests is legacy code. That's a strange way of defining "legacy." Something can be extremely tested but still out of date with the current set of requirements.
|
# ? Aug 22, 2016 20:36 |
|
csammis posted:That's a strange way of defining "legacy." Something can be extremely tested but still out of date with the current set of requirements. Yeah, if the desired behaviour changes, the unit tests must themselves be updated, so the mere presence of unit tests can't really be taken as proof of anything. Unit tests can also cover code without meaningfully testing it, which is pretty much what's guaranteed to happen if developers are writing tests after the fact and aiming for a certain code coverage metric.
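To make the covered-but-not-tested failure mode concrete with a toy example (the percent() method is invented for this sketch): integer division makes the result wrong, a call with no assertion "covers" the line anyway, and only a stated expectation exposes the bug:

```java
public class CoverageDemo {
    // Deliberately buggy: integer division truncates before the multiply,
    // so percent(1, 4) returns 0, not 25.
    static int percent(int part, int whole) {
        return part / whole * 100;
    }

    public static void main(String[] args) {
        // A coverage-chasing "test": the line executes, nothing is checked,
        // and the coverage report happily turns green.
        percent(1, 4);

        // A meaningful test states the expected behaviour, which fails here.
        int got = percent(1, 4);
        System.out.println("percent(1, 4) = " + got + ", expected 25");
    }
}
```

Both calls produce identical coverage numbers; only the second one has any chance of catching the bug, which is the whole argument against coverage targets as a quality proxy.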
|
# ? Aug 22, 2016 20:42 |
|
PT6A posted:Unit tests can also cover code without meaningfully testing it, which is pretty much what's guaranteed to happen if developers are writing tests after the fact and aiming for a certain code coverage metric. One rule of thumb I like to use is that as the number of mocked objects and verify() calls increases, the less useful and more brittle the unit test.
|
# ? Aug 22, 2016 21:37 |
|
PT6A posted:Yeah, if the desired behaviour changes, the unit tests must themselves be updated, so the mere presence of unit tests can't really be taken as proof of anything. On the other hand writing tests after the fact without a target code coverage metric has provided more insightful tests than TDD in my experience.
|
# ? Aug 22, 2016 21:37 |
|
leper khan posted:On the other hand writing tests after the fact without a target code coverage metric has provided more insightful tests than TDD in my experience. Yeah, there are certain things I'd write tests for after the fact, but doing it to hit an arbitrary level of code coverage is not good. Coverage can be a useful tool, but only so you can see what's not covered and judge if it's something that actually should be covered (in which case there will be a reason for writing that test, and the test can be designed with a motivation other than "hit the target amount of code coverage").
|
# ? Aug 22, 2016 21:46 |
|
PT6A posted:Yeah, there are certain things I'd write tests for after the fact, but doing it to hit an arbitrary level of code coverage is not good. Coverage can be a useful tool, but only so you can see what's not covered and judge if it's something that actually should be covered (in which case there will be a reason for writing that test, and the test can be designed with a motivation other than "hit the target amount of code coverage"). This is something I fight with people about almost every week. They can't get it out of their heads that high code coverage = high quality code.
|
# ? Aug 22, 2016 22:56 |
|
necrobobsledder posted:Never forget that the code you bust your rear end writing today is the code that will be the legacy code of tomorrow. Everyone that's written Perl at odd hours of the night is intimately familiar with this reality. Ugh. This reminds me that everybody on LinkedIn vouches for my skill in Perl, so it's the highest ranked language in my LinkedIn skillset. The last time I wrote Perl was when I was home with a sinus infection sometime in 2010.
|
# ? Aug 22, 2016 23:01 |
|
Ithaqua posted:This is something I fight with people about almost every week. They can't get it out of their heads that high code coverage = high quality code. Also, if you have large amounts of code that isn't getting executed during a test suite despite tests being written to cover the desired behaviour of the program, you have to ask yourself if the code that's not being covered is necessary to have in the first place (the exception being error handling code, perhaps, but even that should be tested by reproducing the actual errors that could occur, not just mocking up an exception being thrown or something).
|
# ? Aug 22, 2016 23:11 |
|
Coverage is not equal to completeness. It's still very easy to have bugs sail right through unit tests. This is why (at least in the web world) I prefer regression/integration testing: I'd rather test that an application does what it's supposed to do than test that its plumbing works. I guess I just like a more holistic approach. I recognize that a lot of applications in the finance, medical and engineering fields require much more rigorous testing, but personally I don't want to be responsible for writing code when lives/money are on the line anyway. If you're spending real hours writing unit tests for a Twitter clone then I pity you.
|
# ? Aug 22, 2016 23:23 |
|
Rocko Bonaparte posted:Ugh. This reminds me that everybody on LinkedIn vouches for my skill in Perl, so it's the highest ranked language in my LinkedIn skillset. The last time I wrote Perl was when I was home with a sinus infection sometime in 2010. You can manage that I think. Or at least prevent new ones from showing.
|
# ? Aug 23, 2016 00:21 |
|
Ithaqua posted:This is something I fight with people about almost every week. They can't get it out of their heads that high code coverage = high quality code. I do find that thinking about things in a BDD/TDD way helps me reason about keeping new interfaces a little saner, but I guess the key is that I've been at this long enough where I don't have to actually write the tests to figure that out.
|
# ? Aug 23, 2016 00:45 |
|
|
|
leper khan posted:On the other hand writing tests after the fact without a target code coverage metric has provided more insightful tests than TDD in my experience. A very, very useful thing with tests is automated testing; sometimes a change you make breaks something else and you basically just can't plan for everything by yourself. So you call up your bro Jenkins, give him a ton of tests, and say "hey man, let me know if anything is indirectly broken by this." If a test that has passed the past 300 times it was run suddenly starts failing, that should get some attention so you can figure out why.
|
# ? Aug 23, 2016 00:51 |