OK, let's try it out: IoC container vs. no IoC container.
What do you think? +Hadi Hariri
[Please invite all who are relevant]
 
It's not either/or. A DI Container can be a very helpful tool, but it's not required. It should always be possible to first design the application architecture and then optionally fit a container into it.

Using a DI Container always carries an overhead. If you're very familiar with a particular DI Container, you may feel that this overhead is very small, but it's still there. At the minimum, you'd need to add the reference and configure the container. Even if you adopt a pure Convention over Configuration approach, you'd still need to define the convention.

Given that overhead, there are always going to be cases where it's not warranted. At the extreme, consider a command-line utility implemented in a single class (Hello World). Obviously, in such a case a DI Container is pure overhead.

As an application increases in complexity, there will come a time where the balance tips and a DI Container will provide more benefit than overhead. When that happens, it's time to add one to the project.

There are many parameters that influence when the tipping point occurs, and many of them are subjective.
 
+Mark Seemann I agree, but what I've extracted from Twitter today is that there are people who say that, no matter what, you shouldn't use an IoC Container. I'm trying to understand their reasons; maybe I'm missing something.
For example: an IoC Container is magic to young developers. While I don't agree, at least I can relate to some logic there.
 
I think they are trying to say that the complexity they bring far outweighs the advantages. I think they have a place.
 
+Ariel Ben Horesh The most common argument against containers does indeed tend to be the complexity and level of abstraction involved. Personally, I don't buy that argument, because: 1. Complexity is just an inherent part of software development that you can't escape. 2. Anyone refusing tools on the grounds of abstraction should only be allowed to code in assembler.

However, there's another, more serious argument against DI Containers. With Poor Man's DI, you can compose an object graph like this:

var foo = new Foo(new Bar(new Baz()));

If, at any point during development, you change the constructor signature of any of those classes, you are going to get a compiler error (= rapid feedback).

However, when you introduce a DI Container, now you have something like this:

container.Register(Component.For<IFoo>().ImplementedBy<Foo>());
container.Register(Component.For<IBar>().ImplementedBy<Bar>());
container.Register(Component.For<IBaz>().ImplementedBy<Baz>());
var foo = container.Resolve<IFoo>();

Not only does this quadruple the amount of code involved, but more problematically, it removes a lot of our compile-time safety. Now, when you change the constructor signatures, you are not going to find out about it until run-time. This means that we lose information. This is not trivial. While a build server will be able to catch the compiler error in the first example, it's not going to catch the error from the second example because a build server typically doesn't attempt to run the application.

Since we lose feedback by adding a container, we must gain something else to make it worthwhile, and for many people that doesn't seem to be the case.

While I understand that point, I don't agree with it either, but we have to take a container further before it becomes worthwhile to use.

The compile-time error provided with Poor Man's DI is good, but can also get in the way and become a maintenance burden, because you'd need to fix the composition code every time you change a constructor signature.

This is where a DI Container can really shine. Replace the above container code with a single convention like this, and it begins to become worthwhile again:

container.Register(Classes
.FromAssemblyInDirectory(new AssemblyFilter("bin").FilterByName(an => an.Name.StartsWith("Ploeh")))
.Pick()
.WithServiceAllInterfaces());

Now you can add or remove services and interfaces and refactor constructor signatures to your heart's content, and as long as you follow a simple convention (use Constructor Injection), things will just work (most of the time).

We still lost the compiler feedback, but now we replaced it with another benefit. When the benefit of Auto-registration and Auto-wiring outweighs the overhead of introducing the container and defining the convention, then it makes sense to introduce a DI Container.
 
Doesn't testing help find the types of issues that you lost from compile-time checking?
 
I agree with +Mark Seemann's argument that it quadruples the amount of code, in the kind of IoC implementation described in his usage example. But the removal of compile-time checking is only a problem if you're not testing your code, and in particular, if you're not testing your constructors. With a test harness in place and those constructors under test, this is a moot point.
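One hedged way to recover some of that lost compile-time feedback is a container "smoke test" that tries to resolve every registration, so a changed constructor signature fails on the build server rather than in production. This sketch assumes Castle Windsor (3.x-style API) and NUnit; BuildContainer() is a placeholder for your own bootstrap code.

```csharp
using System.Linq;
using Castle.Windsor;
using NUnit.Framework;

[TestFixture]
public class ContainerSmokeTests
{
    [Test]
    public void AllRegisteredComponentsCanBeResolved()
    {
        IWindsorContainer container = BuildContainer();

        // Ask the kernel for every handler it knows about and resolve
        // each registered service; a constructor with a missing or
        // unregistered dependency throws here, at test time.
        var handlers = container.Kernel.GetAssignableHandlers(typeof(object));

        foreach (var handler in handlers)
        {
            foreach (var service in handler.ComponentModel.Services)
            {
                Assert.That(container.Resolve(service), Is.Not.Null);
            }
        }
    }

    private static IWindsorContainer BuildContainer()
    {
        // Placeholder: return the same container your
        // Composition Root configures for the real application.
        return new WindsorContainer();
    }
}
```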

I believe that the biggest benefit of IoC containers and DI is that they greatly increase the testability of your code. Without DI and an IoC container, code is still testable; you just have to be more conscious of the code you write to keep testability high. When following the patterns inherent in using an IoC container, your code should be more testable. Just my .02, I could be totally wrong, it's happened before. ;)
 
I have not built a system in over 3 years that has not used StructureMap. Dependency Injection is a key tool to modern software development.
 
Once you define some default conventions, the ability to quickly add interfaces to constructors and write tests with the appropriate mocks/stubs is huge. In my experience, 90% of the time I only need to setup the container & define the conventions, and from that point on I never have to think about the container. It just runs in the background and does what I expect it to do. When I need some more advanced configuration, I can dive into the detailed container configuration API and set it up as needed.
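The "setup the container & define the conventions" step described here can be a handful of lines. A minimal sketch, assuming StructureMap's classic scanning API and the default IFoo => Foo naming convention:

```csharp
using StructureMap;

public static class Bootstrapper
{
    public static IContainer Configure()
    {
        return new Container(x => x.Scan(scan =>
        {
            scan.TheCallingAssembly();      // which assemblies to inspect
            scan.WithDefaultConventions();  // maps IFoo to Foo automatically
        }));
    }
}
```

From that point on, adding a new interface/implementation pair requires no further container configuration, which is the "it just runs in the background" experience described above.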

I rarely have problems with runtime constructor problems, but this has a lot to do with having the appropriate level of test coverage.
 
IoC is a tool in my toolbelt because a) as the size and complexity of my services grow, I've found DI to greatly reduce the need for wrappers, utility classes and helpers, b) offering a large codebase an injection option can help with reusable, loosely coupled code, c) I like stubbing fixtures for testing, and d) for cloud deployments I like having configurable bootstrapping for different devices.

Not sure why someone would say to never use it; it reminds me of people who say to never use <insert some design pattern here>. As with all software, every pattern choice is an equation, not a rule.
 
+Hadi Hariri almost as extreme as ignoring everything else someone says for having their own opinion on the subject? :) I kid; I agree with the spirit of your argument.
 
+Hadi Hariri I used to feel that containers were complex/confusing abstractions that provided questionable benefit. That was before I was introduced to the concept of a convention-based container. Conventions can be easily explained to a junior developer, without them needing to understand all the internals of how the container works. I'd argue that having these conventions helps the junior developer to understand & implement proper design principles when building software. Once the junior developer has worked with the default conventions for a while, they can be slowly introduced to the more advanced container configuration options.
 
I find the whole debate a bit odd...
A lot of the arguments made against DI containers here can just as easily be applied to any framework. I'll go as far as to say that all the points I have read here so far are too generic to hold any specific value.
The only point that carries any weight for me is that you lose compile-time checking. But as +Omar Gonzalez points out, this can be mitigated by having proper tests. Something you should have anyway.

In regards to 'overhead': yes, you have to write a bit more code to make sure your dependencies are hooked up, either by convention or by code. I don't consider this to be a problem. We are in the code-writing business, right? When I reach the point that I want to use a DI container in my application, writing extra code is not exactly at the top of my criteria list.

Was about to type more, but then I realized I would just be rehashing +Hadi Hariri's post.
 
I think it's important in large applications to combine bootstrapping (IoC configuration) with a type of application Kernel. The Kernel is a type of service that represents the start-up activity of the entire process. If you have a composite application, this changes a bit; regardless, the point is the same. It also combines bootstrapping activities (including IoC configuration) along appropriate layers of abstraction. In practice this means that I won't be putting service layer interface registrations alongside low-level data access interface registrations. I've worked in systems where you've got one huge registration "region" with hundreds of registrations. It's not very enjoyable. Of course, conventions help to simplify this, but there are scenarios that conventions do not cover 100%. The idea here is to apply SRP to the IoC configuration classes. The Kernel also makes it easy to configure the system for testing. It can include a fluent interface like Kernel.WithServicesMocked.Start(); which kicks off the appropriate bootstrapping for that type of activity.

That being said, I see containers to be a useful tool for wiring up the object graphs in a monolithic application. However, as I think about building an SOA, the individual application processes (services) become quite small because of their autonomy. Many of the "best practices" become in fact extra code bloat and unnecessary weight in such small applications. Things like relational database storage, ORMs, DI Containers, heavy OOP, and the like become things to evaluate for their appropriateness in each service. They are no longer the de-facto for everything everywhere.
 
Many (.NET) containers support modular registration: Castle has Installers, StructureMap has Registries, Autofac has Modules, and for Unity you can repurpose Container Extensions for that role.
 
For me, the only factor in determining whether I should use an IoC container is whether it's OK to make the IoC container a dependency for anybody that uses the library.

For example, let's say that I write a SuperCoolHelper library for doing super-cool things. If I tag that guy with a dependency on StructureMap, Autofac, or whatever, I'm also tagging that dependency on anybody who uses the library.

That dependency on the IoC container is the main problem. It's not a problem for someone to add a reference to SuperCoolHelper.dll, but it is a problem if someone has to also add references to Castle.Windsor.dll, Castle.Core.dll, etc. That's too big of a bite to take on, especially for someone who has already set up IoC registrations using a different container.

That issue aside, I have no idea why anybody would write anything of any value without using an IoC container. Most of them have incredibly-easy convention-based approaches to registration. In fact, not using an IoC container is going to cause you to write a heck of a lot more code than otherwise. And uglier code, too.

Does anybody even use an IoC container like below? I just haven't seen this in years, I thought everybody knew about conventions now:


"However, when you introduce a DI Container, now you have something like this:

container.Register(Component.For<IFoo>().ImplementedBy<Foo>());
container.Register(Component.For<IBar>().ImplementedBy<Bar>());
container.Register(Component.For<IBaz>().ImplementedBy<Baz>());
var foo = container.Resolve<IFoo>();
"
 
What internet rule is it: that all discussions on IoC devolve into someone bringing up a service locator?
 
When writing a library, it's never OK to force a DI Container upon the user, and neither is it necessary: http://stackoverflow.com/questions/2045904/dependency-inject-di-friendly-library/2047657#2047657 A DI Container is a piece of application infrastructure. It has no place in a library, and I also think it would be very ill-placed in a framework.
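The DI-friendly library style from the linked answer can be sketched roughly like this: the library uses plain Constructor Injection and optionally offers a default constructor (a Facade) with sensible defaults, so no container is forced on callers. SuperCoolHelper borrows the hypothetical name from the earlier comment; ISuperCoolParser and DefaultParser are made-up names for illustration.

```csharp
public interface ISuperCoolParser
{
    string Parse(string input);
}

public class SuperCoolHelper
{
    private readonly ISuperCoolParser parser;

    // Full control for callers who compose the object graph
    // themselves, with or without a container.
    public SuperCoolHelper(ISuperCoolParser parser)
    {
        if (parser == null) throw new System.ArgumentNullException("parser");
        this.parser = parser;
    }

    // Convenience overload: a Facade with a sensible default,
    // so casual users need no container at all.
    public SuperCoolHelper() : this(new DefaultParser()) { }

    public string DoSuperCoolThing(string input)
    {
        return this.parser.Parse(input);
    }
}

public class DefaultParser : ISuperCoolParser
{
    public string Parse(string input) { return input.Trim(); }
}
```

Either way, the library ships with no container reference at all, which sidesteps the Castle.Windsor.dll / Castle.Core.dll objection raised above.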

And yes: lots of people write container registrations like that. In fact, neither Unity nor Spring.NET has features for Auto-registration, and to judge from most stats I've seen, Unity is by far the most commonly used container in .NET.

When it comes to other platforms, I don't know, but it's my impression that most Java developers have never heard about the concept of Auto-registration...
 
"DI / IoC tends to add flexibility before the need for it is demonstrated, potentially leading to premature selection of abstraction boundaries with concomitant unnecessary abstraction." - Barry Kelly

Nicely put. Containers have a tendency to flatten the layers of abstraction in a system before those layers have been adequately defined or even designed. This can be terrible or awesome depending on your style/experience. Terrible to the extreme when no thought has been given to the abstraction boundaries. Awesome to the extreme when you know exactly what you're doing and can execute quickly.
 
It surprises me that people still do registration like that, even with Unity. Unity is all I used in production, but it takes one tiny class with about five lines of code to tell Unity to run through certain assemblies, apply an IFoo => Foo convention, and then never register anything again except for any exceptions to the convention.
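A rough sketch of what such a tiny class might look like, assuming the classic Unity API (Microsoft.Practices.Unity) and a simple Foo => IFoo name-matching convention; ConventionRegistration is just an illustrative name:

```csharp
using System.Linq;
using System.Reflection;
using Microsoft.Practices.Unity;

public static class ConventionRegistration
{
    public static void RegisterByConvention(
        IUnityContainer container, Assembly assembly)
    {
        // For every concrete class, look for an implemented
        // interface named "I" + the class name, and register it.
        var registrations =
            from type in assembly.GetTypes()
            where type.IsClass && !type.IsAbstract
            let service = type.GetInterface("I" + type.Name)
            where service != null
            select new { service, type };

        foreach (var r in registrations)
        {
            container.RegisterType(r.service, r.type);
        }
    }
}
```

Exceptions to the convention can still be registered explicitly afterwards, overriding the scanned defaults.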
 
I agree that forcing users of your library to have an IoC is both tacky and unnecessary.
 
+Darren Cauthon Yes, if you know a bit about (read-only) Reflection, it's pretty easy to write conventions for Unity manually. That's also the approach I describe in my book.

However, I don't find it surprising that most Unity users don't do that. Most of them have never encountered a DI Container before reading about Unity in some patterns & practices guideline, so they are learning about it as they use it. Since Unity doesn't have an API for Convention over Configuration, it's not very discoverable. Yes: one can easily define conventions with Reflection, but first one must get the idea. That's not where most of those people are (yet).
 
Can someone give me a really good, compelling reason to use a container?

I understand the concepts and the mechanics, but, after having worked on projects with and without containers, I haven't seen, and no-one's been able to describe, a real substantial benefit to using containers.

What's the killer application?
 
I have two sides to this question.
In many applications, a container of some sort is mandatory, because while manually managing dependencies is easy, managing life cycle is hard(!), and having it handled automatically is a godsend.
The problem that I have with container applications is that people tend to lean on them too much. All abstractions go through the container.

I much rather have drastically different abstractions. Let us take two common examples:

= Sending emails

With a container, you would define an IEmailSender, and do the rest. But that creates tight coupling between the code and the email sending. If you want to do the email sending on a background thread, or if you want to queue it for later execution, you have to modify the code, and it is easy to create subtle dependencies in the timeline. I execute a piece of code that I assume is going to do the work in a synchronous manner.
A common example is making a remote call to a server, nicely abstracted behind an interface. That takes time, it needs retries, so we decide to move it to a dedicated async process.
Because of the way the code is written, it is easy to assume that once that call has been made on the interface, we can do things to that item later on (for example, publish the id, send an action regarding it, etc.).

I would much rather use an explicit task model, something like:

Tasks.ExecuteLater(new SendEmail { parameters } );

Tasks.ExecuteLater( new NotifyAboutNewCustomer { parameters });

That way, I have abstracted out the actual task execution, and I am also making it clear what the timeline is.

This is part of an architecture that makes it very clear that there is a distinction between accessing local state (allowed, fast, if it is down we are also down) and remote state (not allowed in a sync process, can fail, likely to be slow, requires retries).
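One possible shape for that explicit task model, with all names purely illustrative rather than taken from any specific framework:

```csharp
using System;

public interface IBackgroundTask
{
    void Execute();
}

public class SendEmail : IBackgroundTask
{
    public string To;
    public string Subject;

    public void Execute()
    {
        // talk to the SMTP server, with retries,
        // on a dedicated background worker
    }
}

public static class Tasks
{
    // In production this would enqueue to a durable queue, and a
    // separate async process would pick it up. The caller's contract
    // is "this will happen later", never "this has happened".
    public static Action<IBackgroundTask> ExecuteLater = task => { /* enqueue */ };
}
```

Because ExecuteLater is a swappable delegate, a test can replace it with an in-memory collector and assert on the tasks that would have been queued, without any mocking framework.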

= Notifications (customer went into overdraft)

Another example might be a bank with a customer with different strategies for handling going into overdraft, check bouncing, etc.
Using a container, it is easy to do things like IOverdraftStrategy, but that creates a very rigid architecture; changing things becomes hard, because the shape of the code makes a lot of assumptions about its use.
Something like domain events or just standard notifications, with the option of acting on things, is likely to be better. But the container makes it easy to inject things, leading to a lot of IFoo and IBar.
I don't like a lot of abstractions in my code. Ideally, there should be a limited number of them (say, half a dozen or so). Those would be things like Controllers, Views, Tasks, etc. Very high level things. Having a lot of interfaces and indirection, and having to compose a deeply nested object graph to perform an action, is going to be a weight on the architecture, even if the container can just make things happen.
 
I pretty much agree with what +Mark Seemann said.

I typically start without a container these days. I still construct the application in the same way as if I had one however. I setup a Composition Root. In our last new service we built we did a whole sprint without a container. In the 2nd sprint we hit the tipping point +Mark Seemann described where I decided lifetime management and composing larger object graphs were now a pain point so we rewrote the Composition Root with a container. The rest of our service really didn't change and we kept the container private so that it wasn't exposed to abuse. It took maybe an hour or two to redo our Composition Root. No biggie.
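A container-less Composition Root of the kind described here might look roughly like this; all the type names are placeholders, not from the actual service:

```csharp
public interface IOrderRepository { void Save(string order); }
public interface INotifier { void Notify(string message); }

public class SqlOrderRepository : IOrderRepository
{
    private readonly string connectionString;
    public SqlOrderRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }
    public void Save(string order) { /* write to the database */ }
}

public class SmtpNotifier : INotifier
{
    public void Notify(string message) { /* send an email */ }
}

public class OrderController
{
    private readonly IOrderRepository repository;
    private readonly INotifier notifier;

    public OrderController(IOrderRepository repository, INotifier notifier)
    {
        this.repository = repository;
        this.notifier = notifier;
    }
}

public static class CompositionRoot
{
    // All wiring lives in this one class (Poor Man's DI: explicit,
    // compile-checked), so rewriting it around a container later
    // leaves the rest of the codebase untouched.
    public static OrderController CreateOrderController()
    {
        return new OrderController(
            new SqlOrderRepository("connection string here"),
            new SmtpNotifier());
    }
}
```

Keeping the container (or the manual wiring) private to this class is what makes the hour-or-two swap described above possible.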

The Composition Root reminds me of a quote I heard in an Uncle Bob Martin session which I think was along the lines of: "Good architecture lets you defer decisions". I could put off the choice for an entire sprint then make the decision when it was time to make a call. This wouldn't be possible if the container and application composition was scattered (Service Locator?). That would impact a lot of code.

I see the value in containers, I just don't see value in adding one just because. File -> New Project -> Add Reference (your favourite container). Where's the analysis in that? I'll add one when it starts to become beneficial. I don't see the value in adding something when it doesn't have a chance to provide any benefits.
 
+Ayende Rahien Interesting. I never thought of it that way.

On the flip side I have seen people abuse events. I've seen MVVM applications that communicate without an interface over an Event Aggregator but they just ended up creating request/response over the EA. Fast forward a few months later and some other navigation scenarios have popped up and you end up with some really weird temporal coupling and things not happening in the order you need. That can also get you into a real mess.

I guess you have to be careful not to go too far to either side.
 
+Hadi Hariri One thing I've realized is there is a crowd that enjoys the "cool" factor of just being different. Anything that becomes mainstream becomes something they will contradict. It's near impossible to have a conversation about the pros/cons in that environment. Often these things will be called "enterprisey" and that's the argument. Whatever :)
 
+Kelly Sommers I would agree that anything can be abused, sure. But I would strongly argue that request/response will lead to abuse in the short and long term :-)
 
I used to use containers (SM, Castle) regularly and now work somewhere where Ninject is used in a lot of apps. I've also delivered applications with Spring in the Java world. About a couple of years ago I started to question the value which containers were providing me, my teams, the companies I was working with, and the applications being built, and being maintained.

I came to the conclusion that in all cases I could not see an argument for their continued use. Used simply container frameworks provide a way of describing compositions of objects, and when they should no longer have references to them held. Even when just these simple capabilities were used I could see no advantage over just instantiating the object that's needed to perform a task and providing it with the dependencies it requires, and obviously pushing as far to the outside of the system (the onion if you like) as possible the responsibility to do this. Through simple and proven practices like refactoring and generally an outside-in approach the code that is left is clean, appropriate, simple, and most idiomatic C#.

As for using the more 'advanced' capabilities that some container frameworks ship with - things like proxied pipelining forms of AOP, and autowiring (let alone some of the automocking meets container stuff I've seen). I just think it generally makes for applications which are harder to understand, and therefore progressively more expensive to build and to maintain (and encourages what I would consider bad practices - but I don't want to go there here).

When it comes to the 'junior developer' question: I would rather that a developer I work with spends their time mastering the basics of clean OO development than has to invest any time in learning frameworks which make these things opaque. I don't doubt they can learn a modern .NET container fx without too many WTF moments (there are always some when learning any fx), I just don't see that the cost/benefit equation is in favour of this being necessary.

This is not an argument though against frameworks/libraries in general.

If I need logging, for example, I will tend to use a library to do this - me I use Log4Net, but it could perhaps equally be the logging capabilities inbuilt in the .NET framework. Another example for me is testing libraries. I've enjoyed conversations with developers who now never use a third party testing framework. I respect their views (which I won't try to represent) but I can't say that their experience aligns with mine. So I continue to use NUnit, and on those occasions where I need it, Moq. Personally I prefer to avoid things like MSpec because I think that they make the simple more complex, for no real gain. But again, I see colleagues using it and understand that they think differently and respect that. We each in our contexts (which include our own experiences, not just our projects, teams, responsibilities, freedoms, workplaces, etc...) make these cost-benefit decisions and, I hope, continue to evaluate and challenge them as our contexts inevitably change.

There were some interesting conversations on the GOOS list some time back which covered a lot of this ground. Whilst some of the arguments are less relevant in a .NET context (we tend not to use external XML configuration anymore, nor to 'decorate' (pollute IMO) our types), the large part of the arguments raised there are, I think, highly relevant, and I would be very interested to read later how others, particularly those more partial to containers, feel about these arguments. The link to one of these threads, the more interesting I think, is here: https://groups.google.com/forum/#!searchin/growing-object-oriented-software/lexical$20scoping/growing-object-oriented-software/VO3qVY6C_nw/RKbhr-2GRhQJ

Anyway, my 5 cents worth. I'm not interested in convincing anyone to my current view (not unless I'm in a pub, or I'm going to work with you), this is merely a small experience report on my personal journey.
 
Showing my complete ignorance here, but is anybody from the no-container camp able to provide a sample that shows something like ASP.NET MVC, so I can better understand how this would work? I'm happy using an IoC and have had to work with god-awful code that new'd everything up in the controllers, but due to the request lifetime issues I'm not seeing how I would implement loosely coupled code that I could safely use without ending up with issues.
 
+Nathan Gloyn I think those people will tell you that they 'can't be bothered with anything as tiresome and clunky' as ASP.NET MVC, so therefore they don't need DI Containers ;)
 
I think less of that thanks Mark - as it happens it IS quite hard to build a proper clean system on top of ASP.NET MVC due to the way it builds/requests various parts of its pipeline from a container at various stages - ironically enough before they made it "container friendly", it was better for Plain Old OO development.

The only way you're going to be able to use ASP.NET MVC without writing horrible bootstrap code or a container is to use it for routing and views only and build up the rest of the codebase yourself (much like you would if you were writing a NodeJS app, "thanks for the request, now I'll do what I need with it").

Funnily enough, a lot of my attitude over these things has arisen as a consequence of doing NodeJS-type development, because you're generally left to your own devices and let the natural abstractions emerge.
 
I didn't understand the benefit of IoC until I gave it a proper try (using StructureMap). I certainly don't use it in every case but I do find it a very worthwhile thing to have in my toolbox. Oh... and in answer to +Darren Cauthon's question, yes I still have code that looks like that... partly because I'm abusing the system and have situations with multiple concrete classes implementing the same interface (using a name to identify the desired instance), and partly because the code is still working as-is so I haven't gone back to refactor it yet. :-)
 
Hadi - why the leap towards a framework when I mention NodeJS? I don't use ExpressJS as it happens - I just take a request and response object and do things with them - building and pulling in libraries for certain tasks as required
 
No - I use libraries to help with all the common tasks
 
The difference is I control my entry points, and I choose which specific libraries to use based on each individual use case. Please don't be facetious.
 
I think Rob's blog post link was one of the better anti-container arguments I've read. I still feel that a lightweight, auto-wiring container encourages refactoring by allowing easy adding/removing/splitting of dependencies.

I wonder if some of the backlash against containers comes from "interface diarrhea" that I see in more static languages like Java and C# where every class has a corresponding interface. The nice thing about dynamic languages like Node, Ruby, Python, etc. is they make programming to interface (the concept) much easier. Not that it can't be done in C#, just that most people tend to prematurely create interfaces for everything.