|
Good DI makes it easy to avoid using magical hidden machinery to construct instances (static global variables in the simple case), which happens to be the exact opposite of the way DI containers tend to work. For one thing, I don't believe that DI should involve constructing instances at all. Reflexively distinguishing "constructor injection" from "construction" shows how deeply conflated the two concepts are. Put simply, if each request for a database connection (i.e. each instance of your class that has a database connection as a dependency) is supposed to create a new connection instance, then your dependency is not in fact a database connection at all, but rather a database connection factory (also known as a function). Something like this (in pseudo-TypeScript):
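A minimal sketch of that idea (all names here are hypothetical; a real factory would wrap your actual database driver):

```typescript
// A "database connection" dependency that must be fresh per use
// is really a factory, i.e. just a function type.
interface DbConnection {
  query(sql: string): string[];
}

type DbConnectionFactory = () => DbConnection;

class UserRepository {
  // Inject the factory, not a connection instance.
  constructor(private readonly createConnection: DbConnectionFactory) {}

  findNames(): string[] {
    const conn = this.createConnection(); // fresh connection per request
    return conn.query("SELECT name FROM users");
  }
}

// In tests, the factory can hand out a stub connection.
const stubFactory: DbConnectionFactory = () => ({
  query: (_sql: string) => ["alice", "bob"],
});

const repo = new UserRepository(stubFactory);
```

The class never learns how connections are built; swapping the factory swaps every connection it will ever see.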
The problem remains of how to construct root-level objects with dependencies in the first place. My general rule of thumb is that if ThingA is capable of instantiating ThingB via ThingBFactory, then ThingA should itself have a corresponding ThingAFactory that injects the ThingBFactory into it during creation. And if ThingB can instantiate ThingC via ThingCFactory, then that ThingCFactory should be injected into ThingB by ThingBFactory. That creates an implicit dependency chain from ThingC to ThingA, without ThingA and ThingC having to actually know about each other or, more crucially, about each other's dependencies (which is a major issue with the "just pass all the dependencies down from the root" pattern). Root instances of things (e.g. RootViewController) can be created by invoking the corresponding factory from the application entry point. Now the dependency chain is set up, everything is decoupled, and best of all, there's no weird magic poo poo going on behind the scenes. Depending on your language and how much infrastructure you want to put behind it, I'd argue there's not even that much more code involved in doing it "manually".
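The factory chain described above, sketched in TypeScript (the Thing names come from the post; the wiring itself is illustrative):

```typescript
class ThingC {}
type ThingCFactory = () => ThingC;

class ThingB {
  constructor(private readonly makeC: ThingCFactory) {}
  spawnC(): ThingC { return this.makeC(); }
}
type ThingBFactory = () => ThingB;

class ThingA {
  constructor(private readonly makeB: ThingBFactory) {}
  spawnB(): ThingB { return this.makeB(); }
}
type ThingAFactory = () => ThingA;

// Wiring happens once, at the application entry point.
const makeC: ThingCFactory = () => new ThingC();
const makeB: ThingBFactory = () => new ThingB(makeC);
const makeA: ThingAFactory = () => new ThingA(makeB);

// ThingA never sees ThingC or ThingC's dependencies.
const root = makeA();
```

Adding a dependency to ThingC only touches makeC, not ThingA or ThingB.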
|
# ? Mar 17, 2017 21:22 |
|
|
# ? Jun 7, 2024 17:54 |
|
So DI is basically reinventing the Reader monad? I never understood it before.
|
# ? Mar 17, 2017 21:28 |
|
Sinestro posted:So DI is basically reinventing the Reader monad? I never understood it before. My ten-second reading of the Reader monad suggests that it's one way of achieving DI, viz. you have an environment that you set up, and then parts of your code retrieve their dependencies from that environment. But that's certainly not the only way to do DI. For example, you might have an input parser where you examine the type of the input you've received, and select a function (e.g. parseNum, parseText, parseImage, etc.) that can handle that kind of input, then you pass the function to some unmarshalling code that actually calls it. The unmarshalling code has had its dependency injected.
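That parser example might look like this (hypothetical names; a sketch rather than a real unmarshaller):

```typescript
type Parser = (raw: string) => string;

// Hypothetical handlers for each kind of input.
const parseNum: Parser = (raw) => `number:${Number(raw)}`;
const parseText: Parser = (raw) => `text:${raw}`;

// The unmarshalling code has its dependency (the parser) injected per call.
function unmarshal(raw: string, parse: Parser): string {
  return parse(raw);
}

// The caller examines the input type and selects the function to inject.
function handle(raw: string): string {
  const parser = /^\d+$/.test(raw) ? parseNum : parseText;
  return unmarshal(raw, parser);
}
```

No environment, no container: the dependency is just a function argument.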
|
# ? Mar 17, 2017 21:58 |
|
Sinestro posted:So DI is basically reinventing the Reader monad? I never understood it before. code:
code:
edit: I guess that's not technically a combinator since the execDatabaseQuery function itself is a free variable but let's not pick nits here. Volte fucked around with this message at 22:22 on Mar 17, 2017 |
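One way to render the partial-application idea in TypeScript (execDatabaseQuery is a stand-in stub here, closed over as a free variable, as the edit describes):

```typescript
// A stand-in query executor; in real code this would hit a database.
const execDatabaseQuery = (sql: string): string[] =>
  sql.includes("users") ? ["alice"] : [];

// getUser is built by closing over an executor, Reader-style:
// the dependency is supplied by the enclosing environment, not by callers.
const makeGetUser =
  (exec: typeof execDatabaseQuery) =>
  (name: string): boolean =>
    exec("SELECT * FROM users").includes(name);

const getUser = makeGetUser(execDatabaseQuery);
```

Callers of getUser never see the database at all; tests just call makeGetUser with a different exec.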
# ? Mar 17, 2017 22:14 |
|
TooMuchAbstraction posted:Dependency injection is what we're talking about, right? It's loving amazing. You use it to make your code testable. Instead of having a hardcoded reference to the database, your query code takes the DB connection as a parameter, and then your unit test code can create a mock or fake DB and send that in instead. Ditto with network calls, references to the filesystem, etc. Your unit tests can do everything in memory, which means they run faster and more reliably, which means you run them more often, which means they're more useful. This is just using method parameters right? Or am I missing something? The real magic seems to be the mocking framework that mocks a complex object ( like a DB connection ). Edit: well poo poo, something finally clicked for me as well. So that is all it amounts to. Thanks I guess, must have been all the posts about DI in combination with IOC that tripped me up as well. Mr Shiny Pants fucked around with this message at 23:02 on Mar 17, 2017 |
# ? Mar 17, 2017 22:50 |
|
Hammerite posted:Well, that's not hard to understand at all, in fact it's extremely simple. But generally when I see an explanation of dependency injection it's explained in abstract terms and crucially, the discussion is bundled up with a whole lot of other terms, like "inversion of control" and "SOLID" that I might be able to understand if they were introduced individually but which are too much to take in when all introduced at once and defined abstractly. I'm best equipped to understand concepts when they're expressed in terms of a prototype - a motivating example - rather than an abstract definition. It doesn't help that I tend to remember dependency injection as meaning the kind of big, complex frameworks that other posters are talking about (as being distinct from dependency injection), because that's what I've seen talked about in discussions at work. Injection of unexplained jargon like that usually just means that the explainer doesn't have a solid enough grasp on the topic
|
# ? Mar 17, 2017 23:02 |
|
Mr Shiny Pants posted:This is just using method parameters right? Or am I missing something? The real magic seems to be the mocking framework that mocks a complex object ( like a DB connection ). Methods or interfaces, yeah. Really, lots and lots of design patterns are not magical; they're straightforward applications of standard objects. I tend to be distrustful of magic in general, because anything that's magic is implicitly something I don't understand.
|
# ? Mar 17, 2017 23:04 |
|
Magic is all too often used for beginner tutorials and it makes it hard for some people to understand what's happening. Not exactly the same thing, but I mentioned in one of these threads recently about how I didn't understand javascript promises until I sat down and created a basic implementation myself. The Promise API isn't exactly magic, but it hid enough that I didn't get it. It seems like there's some set of circumstances that I don't quite understand where magic is good for devs who understand what is being magic-ed away because it makes their lives easier.
|
# ? Mar 17, 2017 23:15 |
|
Volguus posted:My way of looking at it is as follows: "In order to do my job I need an instance of X. Someone give that to me." Frankly I hate "magic" things that just happen. It obfuscates what's actually going on and makes it much harder for someone unfamiliar with that type of magic or a particular code-base to actually understand what's happening for very little actual gain. You've saved yourself having to pass in a constructor parameter? The worst example I've run into of "magic" behavior is assembly extensions in .Net. I can't loving stand them. You include a reference to an assembly in your project, use it in your code and try to use methods you know should be available for a particular class and they're just not there. The exact class you wanted is there, but methods or properties you expect to be there aren't. If you haven't used that assembly before (or in a while) there's a good chance you don't know or forgot that you need to also include the extension for that assembly to get that functionality. If you're lucky a quick google or look at the docs will point out what's missing, or maybe your IDE will. If not, have fun wasting tons of time the first few times you run into this. This type of poo poo should be handled explicitly by separate classes, not classes that sometimes have one definition and sometimes another. Been working with .Net Core a lot lately and the first day or two was a nightmare of this type of bullshit because .Net Core is also a huge cluster-gently caress of horribly confusing versioning, incomplete/outdated documentation, worthless loving search on the docs site and just bad documentation in general. You can search for an exact method name or Class.method() and find zero results if you search their docs directly. However, if you know where to look in the table of contents you can probably find what you were looking for. 
But just to gently caress with you, their examples seem to be mostly partial examples that exclude the using statements and frequently don't say jack-poo poo about what assemblies are required like standard .Net documentation does. I wasted hours trying to figure out the proper way to initialize SSL with Kestrel before I finally figured out that A) the documentation was for a different, older version of .Net Core, B) the samples in Github were actually based on 2.0alpha where stuff has changed a lot, C) the loving official scaffold I used to create my project in VS Code didn't actually include Kestrel's SSL extension assembly for some idiotic reason and D) because it was an extension I was seeing all the classes I expected to but not the particular methods I needed. In case you can't tell I'm a little bitter about the whole thing, though mostly at this point it's the versioning that still pisses me off. The SDK/tooling versions are separate from the framework version but close enough to ensure confusion, and their naming of the framework versions and their related installers seems to be designed by people intentionally trolling developers. Then there's the fun bullshit about them starting with a new project.json format in 1.0 and deciding to change that mid-stream to a new .csproj format that is similar to old .csproj files but not the same. The CLI has an option to automatically migrate for you but if you added any dependencies with the version set to "*" (latest) it just crashes with cryptic error messages. Hell, if you try to compile a project.json project with the newer CLI tools it gives a cryptic, useless error and fails despite the fact that it's a super easy check for an obvious issue that should just spit out a friendly error about needing to migrate the project file and that "dotnet migrate" will do it for you. It feels like whoever is at the reins is either an extremely pedantic hyper-nerd or just wants the whole thing to fail. 
Here's an article about the version issues with .Net Core, this actually just made me angrier: http://blog.tpcware.com/2016/12/multiple-versions-of-net-core-runtimes-and-sdk-tools-sxs-survive-guide/ To top it off, dotnet --version will return the version of the CLI/SDK but not the runtime. I looked but haven't found a method to show the current version of the runtime that's installed/default, or to list all runtime versions installed. The only method I know of is to go look in the .Net Core install folder and look under shared/Microsoft.NETCore.App. Hell, even the latest installer for the SDK on the download page (https://www.microsoft.com/net/download/core#/sdk) is named dotnet-1.1.1-win-x64.exe, but running dotnet --version spits out 1.0.1. It was worse a few weeks ago when I initially started working with it, when the downloads were named poo poo like: dotnet-dev-win-x64.1.0.0-preview2-1-003177.exe (actually dotnet core 1.1). .Net Core 1.1 also apparently came with older CLI tools than .Net Core 1.0.2/3: the latest minor version of .Net Core had older CLI tools that supported project.json projects, while LTS versions of 1.0.x came with newer CLI tools that didn't support project.json. I think the only reason I even bothered getting it all to work was stubbornness.
|
# ? Mar 17, 2017 23:48 |
|
RandomBlue posted:Frankly I hate "magic" things that just happen. It obfuscates what's actually going on and makes it much harder for someone unfamiliar with that type of magic or a particular code-base to actually understand what's happening for very little actual gain. You've saved yourself having to pass in a constructor parameter? It shouldn't obfuscate anything in a properly designed library. But I agree about .Net. I had the "pleasure" to work with it in the last 6 months or so. Coming from C++ and Java, C# looks like a better Java. It is a better language no question about it. But (and here is when the fun comes in), everything in the .Net ecosystem, the tools, the libraries, the frameworks, look like an undernourished child from Somalia with Down syndrome that was just rescued by a compassionate person when compared with your normal, healthy and properly fed (almost obese) Western boy (Java). It's not even funny. Why do developers subject themselves to that abuse? C#'s superiority over Java at the language level is not maintained everywhere else. Not to mention Microsoft's almost comical tendency to make everything 100 times more complicated than it needs to be, then turning around and instead of fixing the old library, just releasing a brand new one with different (but just as bad) problems than the one before.
|
# ? Mar 18, 2017 00:02 |
|
I mean, on the flipside stuff like serialization libraries can be pretty drat magical sometimes, and I don't really feel the need to understand how they work (blah blah reflection blah blah). I think the difference is that serialization libraries have a very tightly-defined domain, while other kinds of magic tend to reach into fundamental aspects of your program and make your code not really look like how code for that language usually looks.
|
# ? Mar 18, 2017 00:03 |
|
RandomBlue posted:Frankly I hate "magic" things that just happen. It obfuscates what's actually going on and makes it much harder for someone unfamiliar with that type of magic or a particular code-base to actually understand what's happening for very little actual gain. You've saved yourself having to pass in a constructor parameter? It means that you don't need to track everywhere that each dependency is used. And it's not really magic. It's just a matter of making an object that holds all of your dependencies and passing that around, rather than passing each dependency around individually. Yes, you're essentially saving having to pass a constructor parameter, similar to using a struct to avoid having to pass a bunch of variables to a function. You're also mitigating the issues that DI can create. If, for example, some class suddenly needs a new dependency, you don't need to change its constructor signature and also everywhere that it gets called, and then also make sure that any place that calls it has access to that dependency, potentially changing their constructors. You just avoid having to think about that kind of tracking entirely.
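The "object that holds all of your dependencies" might look like this (names hypothetical):

```typescript
// One bag of dependencies, passed around instead of N separate parameters.
interface Deps {
  log: (msg: string) => void;
  now: () => number;
}

class OrderService {
  constructor(private readonly deps: Deps) {}
  place(item: string): string {
    this.deps.log(`placing ${item}`);
    return `${item}@${this.deps.now()}`;
  }
}

// Adding a dependency later means touching Deps, not every constructor
// signature between the root and the class that needs it.
const messages: string[] = [];
const testDeps: Deps = {
  log: (m) => { messages.push(m); },
  now: () => 1234,
};
const svc = new OrderService(testDeps);
```

This is the struct-of-variables analogy from the post: one parameter instead of many, and one place to extend.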
|
# ? Mar 18, 2017 00:11 |
|
Volguus posted:It shouldn't obfuscate anything in a properly designed library. But I agree about .Net. I had the "pleasure" to work with it in the last 6 months or so. Coming from C++ and Java, C# looks like a better Java. It is a better language no question about it. But (and here is when the fun comes in), everything in the .Net ecosystem, the tools, the libraries, the frameworks, look like an undernourished child from Somalia with Down syndrome that was just rescued by a compassionate person when compared with your normal, healthy and properly fed (almost obese) Western boy (Java). I like C# as a language and it works fairly well in Microsoft environments. However, I really prefer Linux on the server side, and mono is/was a bit of a mess, so I was hoping .Net Core would be the way to go for cross-platform web apps. I really like working using "await" with asynchronous calls versus callbacks, etc., and Java doesn't really have anything that easy to work with. Overall though, Java is pretty nice for cross-platform work since JavaFX was made available, but Java seems to have a huge image problem with the public perception of it still based on how things were 10+ years ago when Java apps used Swing and looked horrible and the performance wasn't great. I did recently write a cross-platform utility that does a lot of multi-threaded processing with a JavaFX GUI and H2 database for data storage/access and I was impressed with how easy it was. There's still a bit of pain on the packaging side and issues with 125% scaling factors on high res displays, but that's about it. e: I read a comparison of cross-platform dev tools recently and one article's list of cons for Java were basically: Have to have JVM (uh.. ok... so?), Versioning nightmare (what? Let me introduce you to .Net core), Ugly GUI - but they fixed that plus versioning! (article's comment, so non-point), weak code portability (what? seriously?). But that type of sentiment about Java seems to be fairly common.
Dr. Stab posted:It means that you don't need to track everywhere that each dependency is used. And it's not really magic. It's just a matter of making an object that holds all of your dependencies and passing that around, rather than passing each dependency around individually. Whether or not DI is handled as "magic" depends on the DI framework I suppose. In ASP.Net MVC it is handled as "magic". You define services in your app startup, then add constructor parameters for the services you want injected in your controllers, and .Net both instantiates the controller and injects dependencies completely automatically at runtime. The primary point is that I don't like things that just "work" without clear, in-code connections. Avoiding having to think about it is part of the problem. I've worked with too many lovely coders that would do horrible things because they don't know how things are working and instead of taking the time to figure that out will just make assumptions or bumble around until their code just happens to compile and work somehow with no real knowledge of how or why things are actually working. Automatic/hidden functionality also increases the initial learning curve and with the rate of new languages, frameworks, etc. lately that's becoming a significant factor. Then again, over the last 2-3 months I've had to learn Wordpress, PHP, .Net Core, Bootstrap, Semantic UI, Grav CMS, October CMS, loving ColdFusion 11, Penetration Testing & NetSec, etc.. etc.. and the first two already had me wanting to throw up in my mouth the entire time and I think I'm a bit burned out. I'd worked with PHP before but only as little as I possibly could get away with while porting Moodle to MSSQL years back. RandomBlue fucked around with this message at 01:03 on Mar 18, 2017 |
# ? Mar 18, 2017 00:56 |
|
I don't get it, why would you *not* use a container if you could? Why subject yourself to more resistance to refactoring, more hassle and DRY failures when adding a parameter or composing an object graph? The graph is already a DAG, just implement it with a fluent container API and it's cool and done.
|
# ? Mar 18, 2017 01:07 |
|
Maybe I'm more cool with it because I'm used to that sort of poo poo being everywhere in js, with callbacks being thrown around everywhere, and frameworks are treated as the equivalent of adhering to standard control structures instead of using gotos, but this is basically the most benign form of it. You're passing your function to the framework, and the framework does a small amount of work and immediately calls it. It's not like promises or events or whatever else where you pass the function to the framework and then later on, at some point, it gets called. On the other hand, using reflection is a pretty hosed up thing to do.
|
# ? Mar 18, 2017 01:29 |
|
RandomBlue posted:Overall though, Java is pretty nice for cross-platform work since JavaFX was made available, but Java seems to have a huge image problem with the public perception of it still based on how things were 10+ years ago when Java apps used Swing and looked horrible and the performance wasn't great. Oh, you're talking about java desktop. That's not its strong point, it never was. I've done my share of swing and javafx applications in the last 20 years (and they're really nice and fast frameworks once you get to understand them and learn how to use them) but my advice would be to just not do it if you can help it. The image perception is not something you can change overnight, and the expectations change daily. Use java for the server, let it stay there and use "whatever" for the client(s). A bit of web, a bit of native, it all helps.
|
# ? Mar 18, 2017 03:32 |
|
Volguus posted:It's not even funny. Why do developers subject themselves to that abuse? C#'s superiority over Java at the language level is not maintained everywhere else. Not to mention Microsoft's almost comical tendency to make everything 100 times more complicated than it needs to be, then turning around and instead of fixing the old library, just releasing a brand new one with different (but just as bad) problems than the one before. If it weren't for Microsoft, being a C# developer would be comically easy and you could just get any rear end in a top hat college kid to write your web app. Microsoft writing poor documentation and doing crazy poo poo is my job insurance.
|
# ? Mar 18, 2017 04:23 |
|
I always used to do "manual" DI like this:C# code:
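A TypeScript rendition of that pattern (the post's snippet was C#; here a default argument plays the role of the empty constructor that news up the real dependency, and all names are hypothetical):

```typescript
interface Clock { now(): number; }

class SystemClock implements Clock {
  now(): number { return Date.now(); }
}

class ReportService {
  // Production callers use the default; tests inject a fake.
  constructor(private readonly clock: Clock = new SystemClock()) {}
  stamp(): string { return `report-${this.clock.now()}`; }
}

// Test usage: inject a deterministic clock.
const frozen: Clock = { now: () => 42 };
const svc2 = new ReportService(frozen);
```

This is exactly the "empty constructor that internally creates its dependencies" style that the next reply criticizes: production code stays coupled to SystemClock, and only tests get the injection.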
Now I'm doing ASP.NET Core, and because DI is built into the framework I'm happily using that instead. Using a DI container has some advantages, although it can really obfuscate your code if you do advanced poo poo when you wire up dependencies. Bruegels Fuckbooks posted:If it weren't for Microsoft, being a C# developer would be comically easy and you could just get any rear end in a top hat college kid to write your web app. Microsoft writing poor documentation and doing crazy poo poo is my job insurance. With all the .NET Core stuff the poor documentation has recently turned into no documentation. I now find myself regularly trawling around in the source code and old GitHub issues to figure out how to do things in ASP.NET Core.
|
# ? Mar 18, 2017 10:55 |
|
Did someone say dependency injection? https://github.com/alphagov/specialist-publisher/blob/96cc02d4acad9bddaa741d7547cb1cce34eb4ab7/app/lib/specialist_publisher_wiring.rb
|
# ? Mar 18, 2017 12:09 |
|
LOOK I AM A TURTLE posted:I always used to do "manual" DI like this: That's not really DI if you have that empty constructor that internally creates its dependencies. Especially if the injected version is purely for testing. You're still getting the issues with strong coupling in production, and you have to write all the DI management stuff anyway for your tests With a proper DI setup you can define all the dependencies, switch in test things as necessary (like providing a test database so everything that ends up with the DB dependency will hit that) and your whole thing will just work as normal. You don't need to add special 'for testing' code to everything, and if you just want to do some simple mocking you can because everything's decoupled
|
# ? Mar 18, 2017 12:11 |
|
baka kaba posted:That's not really DI if you have that empty constructor that internally creates its dependencies. Especially if the injected version is purely for testing. You're still getting the issues with strong coupling in production, and you have to write all the DI management stuff anyway for your tests To be honest I've never actually felt the need to switch between different implementations of anything in production. Like I said I do use a DI container now when working in ASP.NET Core, so I don't use the empty constructor approach anymore, but it doesn't make much of a difference for my purposes. For my use DI is entirely about being able to easily write unit tests, and in those tests I generally want to have custom mocks anyway. Not saying my way is the only way or even the best way though. LOOK I AM A TURTLE fucked around with this message at 12:26 on Mar 18, 2017 |
# ? Mar 18, 2017 12:19 |
|
Yeah it's more useful when you have more complicated stuff, with dependencies with their own dependencies with their own dependencies. Or when you move beyond unit testing and you need to check the whole thing works without using production resources while you run tests, or by simulating a particular environment But it's the kind of thing that's obviously a bit more work up front, but once it's done you can work with that as your stuff gets more complex. If you don't use it because you're working on something fairly simple, but then it starts to grow into something more involved, you might have to rewrite a bunch of it just to handle the new complexity
|
# ? Mar 18, 2017 12:39 |
|
RandomBlue posted:Frankly I hate "magic" things that just happen. It obfuscates what's actually going on and makes it much harder for someone unfamiliar with that type of magic or a particular code-base to actually understand what's happening for very little actual gain. You've saved yourself having to pass in a constructor parameter? While not code, here's an example I ran into not too long ago. There's a commercial product out there which monitors several device metrics that apparently does this, once you dig down and find it: code:
There are some problems with this approach: 1) top itself is fairly heavyweight; it looks at all processes (yes, all of the PIDs out in /proc) in addition to device metrics, so that's a lot of unnecessary looking, THREE TIMES IN A ROW. 2) -b is batch mode, which will spit out sequential output rather than updating the terminal; -n is the number of iterations, so this is running only once - the initial read. Which brings us to another layer of magic: how top gets the Cpu value. It does this by reading /proc/stat (the Mem & Swap are from another 'file' there), as do similar apps like dstat. This too sounds okay, at first. If you look at /proc/stat, you'll find that it is simply a bunch of counters since the system was brought up for User/System/Idle (and some other ones) for each cpu (and aggregate). Reading this shortly after boot would give you close to current utilization, but after a week? gently caress no. In order to get current usage, you must read /proc/stat twice, subtract the first read values from the second read values, and then divide by the second read sum minus the first read sum. top even does this - but it does it on the SECOND read, and this script is only reading ONCE. So... welp. Evil_Greven fucked around with this message at 14:19 on Mar 18, 2017 |
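The two-sample calculation described above, as a sketch (fields abbreviated; a real /proc/stat cpu line has more counters than user/system/idle):

```typescript
// Simplified /proc/stat cpu counters: [user, system, idle].
type CpuSample = [number, number, number];

// Utilization between two samples: busy delta over total delta.
function cpuUtilization(first: CpuSample, second: CpuSample): number {
  const d = second.map((v, i) => v - first[i]);
  const total = d.reduce((a, b) => a + b, 0);
  const busy = d[0] + d[1]; // everything except idle
  return busy / total;
}

// A single read since boot (what the broken script effectively uses)
// would instead average over the machine's entire uptime.
```

With counters [100, 100, 800] then [150, 150, 900], the deltas are [50, 50, 100], so utilization over the interval is 100/200 = 0.5.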
# ? Mar 18, 2017 14:15 |
|
The term DI seems to just cover "don't write lovely OOP code". Like this is how I write most of my C#, yet nobody irl could explain this in simple enough terms until now.
|
# ? Mar 18, 2017 15:12 |
|
Yeah really it just means 'if your class depends on other objects, pass them in'. Or to put it another way, new is fuckin banned. It depends on who's evangelising but generally there's a bit of leeway, like it's fine to create simple objects like collections. But 'pass in any dependencies' can end up being a bit complicated, if that sparks off a chain of creating other dependencies, and those need dependencies, and maybe some of those need to use the same object. So a framework handles that automatically, and you get some kind of god factory that will create whatever you want with everything filled in. It's not magic, it's just that all the boilerplate is automatically generated and abstracted away.
|
# ? Mar 18, 2017 15:39 |
|
dougdrums posted:The term DI seems to just cover, "don't write lovely OOP code". Like this is how I write most of my C# yet nobody irl could explain this in simple enough terms until now. You could sum up every design pattern as "don't write lovely OOP code".
|
# ? Mar 18, 2017 16:09 |
|
The most magic DI I've seen recently is how Spring can take a List<SomeInterface> and automatically fill it with one instance of each object that implements SomeInterface. Still not sure about that one.
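The behavior is roughly multibinding: the container collects one instance of every registered implementation into the list. A toy version, assuming a hand-rolled registry in place of Spring's classpath scan:

```typescript
interface Handler { name(): string; }

class EmailHandler implements Handler { name() { return "email"; } }
class SmsHandler implements Handler { name() { return "sms"; } }

// A tiny registry standing in for the container's type scan.
const handlerProviders: Array<() => Handler> = [
  () => new EmailHandler(),
  () => new SmsHandler(),
];

// Requesting List<Handler> yields one instance per implementation.
function resolveAllHandlers(): Handler[] {
  return handlerProviders.map((make) => make());
}
```

The "magic" part in Spring is only the discovery step; the assembly of the list is this mundane.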
|
# ? Mar 18, 2017 16:22 |
|
smackfu posted:The most magic DI I've seen recently is how Spring can take a List<SomeInterface> and automatically fill it with one instance of each object that implements SomeInterface. Still not sure about that one. The dotnet core DI container does this, too. Kinda rad. But definitely magic.
|
# ? Mar 18, 2017 17:38 |
|
smackfu posted:The most magic DI I've seen recently is how Spring can take a List<SomeInterface> and automatically fill it with one instance of each object that implements SomeInterface. Still not sure about that one. Think that's magic? Let's take SomeInterface, and dynamically generate a CachingLoggingSomeInterfaceProxy which logs all calls and caches their result~ https://msdn.microsoft.com/en-us/library/dn178467(v=pandp.30).aspx#sec6
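JavaScript's Proxy can fake a crude version of such a generated caching/logging proxy (a sketch, far less capable than Unity's interception; caching here keys on method name and ignores arguments):

```typescript
// Wrap any object so method calls are logged and results cached by method name.
function cachingLoggingProxy<T extends object>(target: T, log: string[]): T {
  const cache = new Map<string, unknown>();
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value !== "function") return value;
      return (...args: unknown[]) => {
        const key = String(prop);
        log.push(`call ${key}`);
        if (!cache.has(key)) cache.set(key, value.apply(obj, args));
        return cache.get(key);
      };
    },
  });
}

const calls: string[] = [];
let hits = 0;
const service = cachingLoggingProxy({ fetch: () => { hits++; return "data"; } }, calls);
```

Calling service.fetch() twice logs two calls but only hits the underlying function once.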
|
# ? Mar 18, 2017 17:53 |
|
SupSuper posted:You could sum up every design pattern as "don't write lovely OOP code". Haha yeah I suppose that's true. I've gotten in the habit of writing practical C# in the form of a few classes+interfaces, and then having one sort of Widget : IWidget per WidgetLib assembly. The Widget class ends up being a sort of "patchboard" between internal and external classes. Do I have the idea? Or is there something else going on where I misunderstood?
|
# ? Mar 18, 2017 18:03 |
|
I think you should be careful with how far you take it. Use it to substitute things that are non-deterministic or represent system boundaries. Don't wire up dependencies for a bunch of pure functions. Otherwise, you can make it a lot harder than it needs to be to traverse your codebase. It's the same as the peril of excessive indirection. And it's also true that in 90% of the use cases, there will be 1 implementation ever provided in production, and fake one-off implementations for unit tests.
|
# ? Mar 18, 2017 18:26 |
|
DI really isn't all that magical if you understand that even the most complex frameworks are at their cores simply based on directed acyclic graphs and everything else is really just convenience functionality and slick wrappers. Each module has a subset of the nodes in the graph in the form of dependency providers along with a list of the dependency requests for each provider. Once all the modules are collected it simply matches each request up with the providers available and flags situations where there's no provider or too many providers as errors. Although if there are no providers it can perform searches for implicit providers and add those to the graph as needed. Spring's List<SomeInterface> functionality is really just a request that accepts multiple providers of SomeInterface and generates a new provider based on those. Other "magic" like scoping can also be understood in terms of graph manipulation. If you want to make a provider that returns the same value for the duration of some scope that's replacing the unscoped provider in the graph with a new provider that uses the original to produce its initial value. Singletons are just part of a single program-wide scope that's never cleared out like a request or session scope might be. And then once the graph is validated and complete, every request through the framework just traverses the graph as needed, generating everything on demand.
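That graph traversal in miniature (a hypothetical API, not any particular framework): providers keyed by name, each declaring its requests, with a program-wide singleton scope as the cache:

```typescript
type Provider = { needs: string[]; make: (deps: unknown[]) => unknown };

class Container {
  private providers = new Map<string, Provider>();
  private singletons = new Map<string, unknown>();

  register(key: string, needs: string[], make: (deps: unknown[]) => unknown) {
    // "Too many providers" is flagged at registration time.
    if (this.providers.has(key)) throw new Error(`too many providers for ${key}`);
    this.providers.set(key, { needs, make });
  }

  // Resolve by traversing the graph; the cache acts as the singleton scope.
  resolve(key: string): unknown {
    if (this.singletons.has(key)) return this.singletons.get(key);
    const p = this.providers.get(key);
    if (!p) throw new Error(`no provider for ${key}`); // missing-node error
    const deps = p.needs.map((k) => this.resolve(k));
    const value = p.make(deps);
    this.singletons.set(key, value);
    return value;
  }
}

const c = new Container();
c.register("config", [], () => ({ url: "db://local" }));
c.register("db", ["config"], ([cfg]) => `conn:${(cfg as { url: string }).url}`);
```

Request and session scopes would just be additional caches that get cleared; everything else a real framework adds is validation and convenience on top of this walk.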
|
# ? Mar 18, 2017 19:57 |
|
I've harped on this before, but the real thing is that you should always be writing your code as if everything was a library — as small and independent a library as reasonably possible. Those libraries are then necessarily independently testable, and the only things that might explicitly need "DI" are the integration points that are defined in terms of those libraries, and you only really need that if the lower-level libraries add substantial complexity that would make the higher-level testing more difficult or expensive (e.g. if they require bringing up a whole GUI).
|
# ? Mar 18, 2017 21:28 |
|
fleshweasel posted:I think you should be careful with how far you take it. Use it to substitute things that are non-deterministic or represent system boundaries. Don't wire up dependencies for a bunch of pure functions. Otherwise, you can make it a lot harder than it needs to be to traverse your codebase. It's the same as the peril of excessive indirection. This is super important. We went overboard on DI in one of our projects - every class had an interface, which usually meant that every interface just had one class. Most of these were classes that could have been entirely static with pure functions. The tests for these were more of a hassle than if they'd been entirely static. Overall this resulted in a codebase that was annoying to navigate and also to develop in since the convention was to make an interface for your class, regardless of whether it was necessary or not. The point is, be judicious in how you apply DI.
|
# ? Mar 19, 2017 15:18 |
|
Bognar posted:The point is, be judicious in how you apply DI. Be judicious how you design and write code. Never make blanket rules (such as "all classes have an interface") unless you know you're working with people that require them, never say "never do X" unless it's followed by "unless", do use common sense and best practices to solve the problem at hand.
|
# ? Mar 19, 2017 18:09 |
|
G.K. Chesterton posted:In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.” This, but for coding best practice.
|
# ? Mar 19, 2017 18:46 |
|
Doom Mathematic posted:This, but for coding best practice. That's how you get things like openssl still supporting ultrix.
|
# ? Mar 19, 2017 19:21 |
|
Doom Mathematic posted:This, but for coding best practice. Ask me about the coding style guidelines enforced by clangformat that can't be expressed in clangformat...
|
# ? Mar 19, 2017 21:54 |
|
fritz posted:That's how you get things like openssl still supporting ultrix. If you know it's there to support ultrix then you passed the test and you can remove it.
|
# ? Mar 20, 2017 08:07 |
|
|
Bognar posted:We went overboard on DI in one of our projects - every class had an interface, which usually meant that every interface just had one class.
|
# ? Mar 20, 2017 20:17 |