Quino 2: Starting up an application, in detail

Published by marco on in Programming

As part of the final release process for Quino 2, we’ve upgraded 5 solutions[1] from Quino 1.13 to the latest API in order to shake out any remaining API inconsistencies or even just inelegant or clumsy calls or constructs. A lot of questions came up during these conversions, so I wrote the following post to provide detail on the exact workings and execution order of a Quino application.

I’ve discussed the design of Quino’s configuration before, most recently in API Design: Running an Application (Part I) and API Design: To Generic or not Generic? (Part II) as well as the three-part series that starts with Encodo’s configuration library for Quino: part I.

Quino Execution Stages

The life-cycle of a Quino 2.0 application breaks down into roughly the following stages:

  1. Build Application: Register services with the IOC, add objects needed during configuration and add actions to the startup and shutdown lists
  2. Load User Configuration: Use non-IOC objects to bootstrap configuration from the command line and configuration files; IOC is initialized and can no longer be modified after action ServicesInitialized
  3. Apply Application Configuration: Apply code-based configuration to IOC objects; ends with the ServicesConfigured action
  4. Execute: Execute the main loop, event handler, etc.
  5. Shut Down: Dispose of the application, shutting down services in the IOC, setting the exit code, etc.

Stage 1

The first stage is all about putting the application together with calls to Use various services and features. This phase is covered in detail in three parts, starting with Encodo’s configuration library for Quino: part I.
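As a rough sketch of stage 1, the following composes an application from the modules mentioned in this article (UseCore() and UseApplication()); the product-specific call is hypothetical and only illustrates where application-specific registration would go.

```csharp
// Stage 1: compose the application; nothing is resolved from the IOC yet.
IApplication CreateApplication()
{
  var application = new Application();

  application.UseCore();        // a handful of low-level services, no actions
  application.UseApplication(); // standard startup/shutdown actions

  // application.UseMyProduct(); // hypothetical product-specific module

  return application;
}
```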

Stage 2

Let’s tackle this one last because it requires a bit more explanation.

Stage 3

Technically, an application can add code to this phase by adding an IApplicationAction before the ServicesConfigured action. Use the Configure<TService>() extension method in stage 1 to configure individual services, as shown below.

  application.Configure<IFileLogSettings>(s => s.Behavior = FileLogBehavior.MultipleFiles); // settings-interface name illustrative

Stage 4

The execution phase is application-specific. This phase can be short or long, depending on what your application does.

For desktop applications or single-user utilities, stage 4 is executed in application code, as shown below, in the Run method, which is called by the ApplicationManager after the application has started.

var transcript = new ApplicationManager().Run(CreateApplication, Run);

IApplication CreateApplication() { … }
void Run(IApplication application) { … }

If your application is a service, like a daemon or a web server or whatever, then you’ll want to execute stages 1–3 and then let the framework send requests to your application’s running services. When the framework sends the termination signal, execute stage 5 by disposing of the application. Instead of calling Run, you’ll call CreateAndStartUp.

var application = new ApplicationManager().CreateAndStartUp(CreateApplication);

IApplication CreateApplication() { … }

Stage 5

Every application has certain tasks to execute during shutdown. For example, an application will want to close down any open connections to external resources, close files (especially log files) and perhaps inform the user of shutdown.

Instead of exposing a specific “shutdown” method, a Quino 2.0 application can simply be disposed to shut it down.

If you use ApplicationManager.Run() as shown above, then you’re already sorted—the application will be disposed and the user will be informed in case of catastrophic failure; otherwise, you can shut down and get the final application transcript from the disposed object.

application.Dispose();
var transcript = application.GetTranscript();
// Do something with the transcript…

Stage 2 Redux

We’re finally ready to discuss stage 2 in detail.

An IOC has two phases: in the first phase, the application registers services with the IOC; in the second phase, the application uses services from the IOC.

An application should use the IOC as much as possible, so Quino keeps stage 2 as short as possible. Because it can’t use the IOC during the registration phase, code that runs in this stage shares objects via a poor-man’s IOC built into the IApplication that allows modification and only supports singletons. Luckily, very little end-developer application code will ever need to run in this stage. It’s nevertheless interesting to know how it works.

Obviously, any code in this phase that uses the IOC will cause it to switch from phase one to phase two and subsequent attempts to register services will fail. Therefore, while application code in stage 2 has to be careful, you don’t have to worry about not knowing you’ve screwed up.

Why would we have this phase? Some advocates of using an IOC claim that everything should be configured in code. However, it’s not uncommon for applications to want to run very differently based on command-line or other configuration parameters. The Quino startup handles this by placing the following actions in stage 2:

  • Parse and apply command-line
  • Import and apply external configuration (e.g. from file)

An application is free to insert more actions before the ServicesInitialized action, but they have to play by the rules outlined above.
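For illustration only, an application-specific stage-2 action might look something like the following. The shape of IApplicationAction and the action class shown here are assumptions for the sketch, not the literal Quino API; the one rule from above holds, though: the action may only share objects through the poor-man’s IOC.

```csharp
// Hypothetical: an action that runs before ServicesInitialized, i.e.
// before the real IOC is finalized. It may only use objects shared via
// SetSingle()/GetSingle().
public class ApplyMachineOverridesAction : IApplicationAction
{
  public void Execute(IApplication application)
  {
    var locations = application.GetSingle<ILocationManager>();

    // ...adjust configuration based on machine-specific files found
    // through the location manager...
  }
}
```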

“Single” objects

Code in stage 2 shares objects by calling SetSingle() and GetSingle(). There are only a few objects that fall into this category.
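A minimal sketch of how stage-2 code shares such an object; the concrete LocationManager type is assumed for illustration.

```csharp
// During stage 1 or 2: publish a singleton in the poor-man's IOC.
application.SetSingle<ILocationManager>(new LocationManager());

// Later, still during stage 2: retrieve it without touching the real IOC.
var locations = application.GetSingle<ILocationManager>();
```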

The calls UseCore() and UseApplication() register most of the standard objects used in stage 2. And while they’re mostly used during stage 2, some of them are also kept in the poor-man’s IOC for cases of catastrophic failure, in which the real IOC cannot be assumed to be available. A good example is the IApplicationCrashReporter.

Executing Stages

Before listing all of the objects, let’s take a rough look at how a standard application starts. The following steps outline what we consider to be a good minimum level of support for any application. Of course, the Quino configuration is modular, so you can take as much or as little as you like: you can use a naked Application, which has absolutely nothing registered, or call UseCore() to have a bit more, as it registers a handful of low-level services but no actions. However, we recommend calling at least UseApplication(), which adds most of the functionality outlined below.

  1. Create application: This involves creating the IOC and most of the IOC registration as well as adding most of the application startup actions (stage 1)
  2. Set debug mode: Get the final value of RunMode from the IRunSettings to determine if the application should catch all exceptions or let them go to the debugger. This involves getting the IRunSettings from the application and getting the final value using the IApplicationManagerPreRunFinalizer. This is commonly an implementation that allows setting the value of RunMode from the command line in debug builds. This further depends on the ICommandSetManager (which depends on the IValueTools) and possibly the ICommandLineSettings (to set the CommandLineConfigurationFilename if it was set by the user).
  3. Process command line: Set the ICommandProcessingResult, possibly setting other values and adding other configuration steps to the list of startup actions (e.g. many command-line options are switches that are handled by calling Configure<TSettings>() where TSettings is the configuration object in the IOC to modify).
  4. Read configuration file: Load the configuration data into the IConfigurationDataSettings, involving the ILocationManager to find configuration files and the ITextValueNodeReader to read them.
  5. The ILogger is used throughout by various actions to log application behavior
  6. If there is an unhandled error, the IApplicationCrashReporter uses the IFeedback or the ILogger to notify the user and log the error
  7. The IInMemoryLogger is used to include all in-memory messages in the IApplicationTranscript

The next section provides detail on each of the individual objects referenced in the workflow above.

Available Objects

You can get any one of these objects from the IApplication in one of three ways: by calling GetSingle<TService>() (safe in all stages), by calling GetInstance<TService>() (safe only in stage 3 or later), or by calling a convenience method, which almost always exists and whose name starts with “Get” and ends in the service name.

The example below shows how to get the ICommandSetManager[2].

application.GetCommandSetManager(); // Recommended
application.GetSingle<ICommandSetManager>(); // Prefer the one above
application.GetInstance<ICommandSetManager>(); // Safe only in stage 3 or later

All three calls return the exact same object, though. The first two from the poor-man’s IOC; the last from the real IOC.

Only applications that need access to low-level objects or need to mess around in stage 2 need to know which objects are available where and when. Most applications don’t care and will just always use GetInstance().

The objects in the poor-man’s IOC are listed below.


  • IValueTools: converts values; used by the command-line parser, mostly to translate enumerated values and flags
  • ILocationManager: an object that manages aliases for file-system locations, like “Configuration”, from which configuration files should be loaded or “UserConfiguration” where user-specific overlay configuration files are stored; used by the configuration loader
  • ILogger: a reference to the main logger for the application
  • IInMemoryLogger: a reference to an in-memory message store for the logger (used by the ApplicationManager to retrieve the message log from a crashed application)
  • IMessageFormatter: a reference to the object that formats messages for the logger

Command line

  • ICommandSetManager: sets the schema for a command line; used by the command-line parser
  • ICommandProcessingResult: contains the result of having processed the command line
  • ICommandLineSettings: defines the properties needed to process the command line (e.g. the Arguments and CommandLineConfigurationFilename, which indicates the optional filename to use for configuration in addition to the standard ones)


Configuration

  • IConfigurationDataSettings: defines the ConfigurationData which is the hierarchical representation of all configuration data for the application as well as the MainConfigurationFilename from which this data is read; used by the configuration-loader
  • ITextValueNodeReader: the object that knows how to read ConfigurationData from the file formats supported by the application[3]; used by the configuration-loader


Execution

  • IRunSettings: an object that manages the RunMode (“release” or “debug”), which can be set from the command line and is used by the ApplicationManager to determine whether to use global exception-handling
  • IApplicationManagerPreRunFinalizer: a reference to an object that applies any options from the command line before the decision of whether to execute in release or debug mode is taken.
  • IApplicationCrashReporter: used by the ApplicationManager in the code surrounding the entire application execution and therefore not guaranteed to have a usable IOC available
  • IApplicationDescription: used together with the ILocationManager to set application-specific aliases to user-configuration folders (e.g. AppData\{CompanyTitle}\{ApplicationTitle})
  • IApplicationTranscript: an object that records the last result of having run the application; returned by the ApplicationManager after Run() has completed, but also available through the application object returned by CreateAndStartUp() to indicate the state of the application after startup.

Each of these objects has a very compact interface and a single responsibility. An application can easily replace any of these objects by calling UseSingle() during stage 1 or 2. This call sets the object in both the poor-man’s IOC as well as the real one. For those rare cases where a non-IOC singleton needs to be set after the IOC has been finalized, the application can call SetSingle(), which does not touch the IOC. This feature is currently used only to set the IApplicationTranscript, which needs to happen even after the IOC registration is complete.
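For example, to replace the standard logger; the CustomLogger type here is hypothetical.

```csharp
// Stage 1 or 2: UseSingle() sets the object in both the poor-man's IOC
// and the real one.
application.UseSingle<ILogger>(new CustomLogger()); // CustomLogger is hypothetical

// After the IOC is finalized: SetSingle() updates only the poor-man's IOC.
application.SetSingle<IApplicationTranscript>(transcript);
```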


[1] Two large customer solutions, two medium-sized internal solutions (Punchclock and JobVortex) as well as the Demo/Sandbox solution. These solutions include the gamut of application types:

  • 3 ASP.NET MVC applications
  • 2 ASP.NET WebAPI applications
  • 2 Windows services
  • 3 Winform/DevExpress applications
  • 2 Winform/DevExpress utilities
  • 4 Console applications and utilities

[2] I originally used ITextValueNodeReader as an example, but that’s one case where the recommended call doesn’t match 1-to-1 with the interface name.

application.GetConfigurationDataReader(); // Recommended
[3] Currently only XML, but JSON is on the way when someone gets a free afternoon.


Apple Photos: a mixed review

Published by marco on in Technology

 A few months back, Apple replaced iPhoto with Photos.

There are some good things about it. It’s noticeably faster on my machine and, at the same time, seems to use less RAM (at least at first; see below). These are good things. However, the speed and space improvements come at the cost of a mysterious loss of functionality.

I call this lack mysterious because Apple didn’t just replace iPhoto with Photos—it claims to have merged iPhoto with Aperture, which is/was a much more powerful product. I would have assumed that iPhoto users would be bowled over by the addition of new functionality and that Aperture users would be the ones left feeling hamstrung by the update.

Instead, there are navigational and editing features missing to which I’d grown quite accustomed in iPhoto.[1]

“Show Original” is missing

You can no longer see the original picture at the push of a button. This was an extremely useful feature because it let you quickly see how much you’d changed the picture. You could just hold down Shift in iPhoto and it would show you the original photo so you could quickly compare it to the current version.

In Photos, you can only revert to the original, which throws away all of your changes. While this is a good feature, it is in no way a replacement for “Show Original”.

I would rather have seen them improve this feature to allow you to compare the last n edits until you’d switched pictures or saved. After that, only the original and the latest versions need be available (which both Photos and iPhoto are already keeping anyway).

Picture-title editing is broken

You can no longer edit multiple picture titles quickly by tabbing through them. In Photos, hitting the Tab key sends the keyboard focus into the aether, lost until you pick up the mouse and manually click into the next picture title. For an application that lets you manage tens of thousands of photographs and publish them to various media, this is a very sad regression.

“Set Location” is gone

You can no longer set the location of a picture. You can only remove location information (presumably for publishing to services that don’t strip the location for you) or “Revert to Original Location”. The latter tantalizingly suggests that there is some way to actually change the location, but after a fruitless search, I had to conclude the feature is only able to restore the location should you have inadvertently removed it. Granted, this feature was not stellar in iPhoto, but it worked. It had a tiny, impractical UI, used an inordinate amount of RAM and the suggestions list left a lot to be desired, but it worked well enough to put the pin on the map for your pictures in more-or-less the right spot. If it’s still possible, it’s damned hard to find.

Edit: since I wrote the bit above, Photos has gotten an update and you can actually enter your own location again. It works quite well and feels more reliable than iPhoto’s feature. However, it’s in the undockable and non–keyboard-friendly “Info” window, which means you’ve got a lot of clicking ahead of you. It would also have been nice if they’d included a “Set Location…” menu item to make the feature discoverable.

Face Recognition is Hit or Miss

iPhoto’s face-recognition feature had its drawbacks, but it at least tended to highlight only actual faces. It didn’t seem to learn very well, but it didn’t mark random bits of scenery as potential faces. The recognition algorithm in Photos has gone completely off the grid in some cases.

As with the location feature, though, recent updates to face recognition have made it productive and fun again to select faces. Sometimes, though, its suggestions seem completely crazy. Just when you think, “Wow, it picked my face when I’m looking for pictures of my Dad. I must look like him at some digitally recognizable level,” Photos decides that random bits of scenery and a slew of women all also look like my Dad.

Face-Recognition UI is limiting

Though the new selector is in some ways nicer than the old one, it still takes too many clicks to finish up identifying faces. If you see someone you recognize, you have to click the picture and type in the name. If Photos finds others it thinks are that person, then you can select those. If Photos doesn’t, it congratulates you that you found one picture, makes you click OK and takes you back to the chooser to select the next photo. If it misidentifies a person, you can only say that it’s not the person it suggested, but you can no longer tell it who that person is. You also can’t tell it from that screen that this is not a picture of a person. Also, if there is more than one person in the picture, you can’t identify the other people in that picture from there either. You have to wait and hope that Photos gets around to letting you select that person.

Missing context in Face Recognition

While it’s nice that Photos now shows the face in context of the whole picture (zooming in on the face when you hover it), you still can’t jump to the photo in the “Moment” or event to see more context. And, even if you do manually go to the “Moment”, you can’t easily tab through the faces anymore. It kind of works, but you can get to the first face only by mashing on Tab several times. It’s not obvious where the focus is beforehand, so be careful or you’ll go too far. If you tab past the face you wanted, don’t expect to be able to shift + tab back—the focus then gets stuck on the list and you’re left to pick up the mouse.

Choosing a Person in Faces

The drop-down list of people’s names is better than in iPhoto (it no longer feels like it’s gobbling memory just to show the thing), but it’s still not sorted by recently used names. It’s always alphabetical, which means if you have to type a person’s whole first name and the first letter of their last name just to skip the other person with the same first name who you almost never select, then you can just go ahead and do that every single time. Also, it only searches your text from the beginning of the name, so you can’t type a unique piece of a person’s last name to select that name quickly.

What it does do nicely is pick up and match information from Contacts. If you have named the person in your contacts differently than in Photos (or vice versa), changes you make to Contacts are picked up immediately, which is nice.

No Historical Navigation

You can search people by name and it shows their pictures, grouped by year/event. However, if you searched that person to fix a typo in their name, you’re out of luck because you can’t change the name of the virtual “folder” that you have open. Instead, you have to cancel out of the search and manually scroll through the list of faces, searching for the one you want. My list has over 450 faces in it. Also, the list is sorted by the number of pictures of that person, in descending order. Hooray. This is not particularly conducive to finding a particular person.

A better search would have stayed on the main page with the faces in bubbles, restricting the ones displayed to the ones that match the search text. Double-clicking on a single face would take you to the search results.

Make Key Photo is not so easy

The “Make Key Photo” option isn’t available where you’d expect it to be. If you’re not explicitly in the “Faces” area, you can’t make a key photo. If you stay in the faces area, you can’t really search for pictures as well as when you’re browsing “Moments”. If you browse too far, good luck jumping back to where you were: though the navigation sometimes feels historical, it’s actually spatial. So if you jump to “Moments” from “Faces”, you can’t easily jump back to where you were in Faces.

A lot of this confusion could be solved with a more web-like historical navigation and, for the love of all that is holy, just capitulating and putting stuff in the menus again or using that evil, evil shortcut menu. Apple’s drive to make a desktop application function just like a hamstrung UI made for tablets or phones where you can only fingerpaint on the screen is quite apparent.

Random list of Unnamed Faces

As you work with “Faces”, you start to get the impression that there are pictures with unnamed faces that aren’t in the list at the bottom. You would be correct. The list at the bottom—which is the only way to start identifying faces—only shows some of the pictures. If you scroll all the way to the right and identify that face or mark that picture as “Not a Face”, a new picture will slide in from the right, where many, many more are waiting out of your reach. You will be patient and work with the ones that Photos has decided you will work with.

If the list is long, when you ignore or name a face, Photos animates the removal. It does this, however, only after it’s actually removed the picture, which takes a few hundred milliseconds. The lag here makes it difficult to ignore multiple faces quickly. At least you can select multiple faces—I was going to be all snarky about this being a missing feature until I actually tried it and discovered to my surprise that it worked.

It would also be nice if you could switch between “browse” and “edit” mode. As it stands, 80% of the screen is taken up by a pretty browser. If you’re identifying faces, you’d actually rather use all of that space to show the unnamed faces instead of just scrolling horizontally in the bottom 20% of the UI.

Keyboard Support is Sad

There are shortcut keys, but you have to find them all yourself. None are marked or in the menus (e.g. delete without asking is + Delete, Go back is + up-arrow) and so on.

Memory Usage / Stability

Though the user experience is smoother and faster than in iPhoto, the stability is not much better. After using the facial recognition feature for about half an hour, Photos crashed and took quite a while to clean itself up before offering to restart itself. After a restart, I kept a closer eye on memory usage and, while it started off at a reasonable 500MB, it quickly climbed to 1.2GB after 10 minutes.

The Map and Overview are very cool

The events are gone and have been replaced with “Moments”. However, all of your data is organized by year and you can zoom out until you see a truly impressive number of thumbnails on one screen. Clicking on the place names jumps to a map with all of your pictures spread on it. Zoom in to see where the pictures were taken. This is much nicer and faster and smoother than it was in iPhoto.


Now that I can set the location again, there are no blocker issues to my use cases. It takes longer for me to name pictures now since I can’t just tab through them anymore, but at least it’s still possible. I would rather I could compare two versions of a picture, but it’s also not a dealbreaker. The original version of Photos left me quite cold, but the latest version is at least sufficient, though I understand that that’s not a ringing endorsement.

[1] I am 100% aware that Photos is free, as was iPhoto before it; that, in fact, OS X is also free to use; and that I haven’t actually given Apple any money for anything since I bought my iMac over 6 years ago. This does not stop me from lamenting serious regressions in a tool that I’ve integrated into my workflow. I’ve looked for alternatives and came up with nothing. I’m kinda stuck on iPhoto/Photos for making my photo albums and would love for them to restore some of the features they so callously tossed to the side.

IServer: converting hierarchy to composition

Published by marco on in Programming

Quino has long included support for connecting to an application server instead of connecting directly to databases or other sources. The application server uses the same model as the client and provides modeled services (application-specific) as well as CRUD for non-modeled data interactions.

We wrote the first version of the server in 2008. Since then, it’s acquired better authentication and authorization capabilities as well as routing and state-handling. We’ve always based it on the .NET HttpListener.

Old and Busted

As late as Quino 2.0-beta2 (which we had deployed in production environments already), the server hierarchy looked like the screenshot below, pulled from issue QNO-4927:

 Server class/interface hierarchy

This screenshot was captured after a few unneeded interfaces had already been removed. As you can see by the class names, we’d struggled heroically to deal with the complexity that arises when you use inheritance rather than composition.

The state-handling was welded onto an authentication-enabled server, and the base machinery for supporting authentication was spread across three hierarchy layers. The hierarchy only hints at composition in its naming: the “Stateful” part of the class name CoreStatefulHttpServerBase<TState> had already been moved to a state provider and a state creator in previous versions. That support is unchanged in the 2.0 version.

Implementation Layers

We mentioned above that implementation was “spread across three hierarchy layers”. There’s nothing wrong with that, in principle. In fact, it’s a good idea to encapsulate higher-level patterns in a layer that introduces few dependencies and to confine heavier dependencies to other layers. This allows applications not only to use a common implementation without pulling in unwanted dependencies, but also to profit from the common tests that ensure the components work as advertised.

In Quino, the following three layers are present in many components:

  1. Abstract: a basic encapsulation of a pattern with almost no dependencies (generally just Encodo.Core).
  2. Standard: a functional implementation of the abstract pattern with dependencies on non-metadata assemblies (e.g. Encodo.Application, Encodo.Connections and so on)
  3. Quino: an enhancement of the standard implementation that makes use of metadata to fill in implementation left abstract in the previous layer. Dependencies can include any of the Quino framework assemblies (e.g. Quino.Meta, Quino.Application and so on).

The New Hotness[1]

The diagram below shows the new hotness in Quino 2.[2]

 Quino 2.0 Server Infrastructure

The hierarchy is now extremely flat. There is an IServer interface and a Server implementation, both generic in TListener, of type IServerListener. The server manages a single instance of an IServerListener.

The listener, in turn, has an IHttpServerRequestHandler, the main implementation of which uses an IHttpServerAuthenticator.
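The composition described in the last few paragraphs can be sketched as follows; these declarations illustrate the shape of the design, not the literal Quino interfaces.

```csharp
// Illustrative shape of the flattened, composed server types.
public interface IServerListener
{
  // Accepts connections and dispatches requests to the request handler.
}

public interface IServer<TListener>
  where TListener : IServerListener
{
  // The server manages a single listener instance.
  TListener Listener { get; }
}

public interface IHttpServerRequestHandler
{
  // The main implementation uses an IHttpServerAuthenticator
  // and an IServerStateProvider to service each request.
}
```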

As mentioned above, the IServerStateProvider is included in this diagram, but is unchanged from Quino 2.0-beta3, except that it is now used by the request handler rather than directly by the server.

You can see how the abstract layer is enhanced by an HTTP-specific layer (the Encodo.Server.Http namespace) and the metadata-specific layer is nicely encapsulated in three classes in the Quino.Server assembly.

Server Components and Flow

This type hierarchy has decoupled the main elements of the workflow of handling requests for a server:

  • The server manages listeners (currently a single listener), created by a listener factory
  • The listener, in turn, dispatches requests to the request handler
  • The request handler uses the route handler to figure out where to direct the request
  • The route handler uses a registry to map requests to response items
  • The request handler asks the state provider for the state for the given request
  • The state provider checks its cache for the state (the default support uses persistent states to cache sessions for a limited time); if not found, it creates a new one
  • Finally, the request handler checks whether the user for the request is authenticated and/or authorized to execute the action and, if so, executes the response items
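Pulled together, the flow above might read like this in code; the member and method names are assumptions for the sketch, not the literal Quino API.

```csharp
// Hypothetical condensation of the request flow described above.
void HandleRequest(IRequest request)
{
  // The route handler uses the route registry to map the request
  // to response items.
  var route = RouteHandler.GetRoute(request);

  // The state provider returns a cached state or creates a new one.
  var state = StateProvider.GetState(request);

  // Authentication and authorization gate execution of the response items.
  if (Authenticator.IsAuthenticated(request, state) &&
      Authenticator.IsAuthorized(route, state))
  {
    foreach (var item in route.ResponseItems)
    {
      item.Execute(state);
    }
  }
}
```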

It is important to note that this behavior is unchanged from the previous version—it’s just that now each step is encapsulated in its own component. The components are small and easily replaced, with clear and concise interfaces.

Note also that the current implementation of the request handler is for HTTP servers only. Should the need arise, however, it would be relatively easy to abstract away the HttpListener dependency and generalize most of the logic in the request handler for any kind of server, regardless of protocol and networking implementation. Only the request handler is affected by the HTTP dependency, though: authentication, state-provision and listener-management can all be re-used as-is.

Also of note is that the only full-fledged implementation is for metadata-based applications. At the bottom of the diagram, you can see the metadata-specific implementations for the route registry, state provider and authenticator. This is reflected in the standard registration in the IOC.

These are the service registrations from Encodo.Server:

return handler
  .RegisterSingle<IServerSettings, ServerSettings>()
  .RegisterSingle<IServerListenerFactory<HttpServerListener>, HttpServerListenerFactory>()
  .Register<IServer, Server<HttpServerListener>>();

And these are the service registrations from Quino.Server:

return handler
  .RegisterSingle<IServerRouteRegistry<IMetaServerState>, StandardMetaServerRouteRegistry>()
  .RegisterSingle<IServerStateProvider<IMetaServerState>, MetaPersistentServerStateProvider>()
  .RegisterSingle<IServerStateCreator<IMetaServerState>, MetaServerStateCreator>()
  .RegisterSingle<IHttpServerAuthenticator<IMetaServerState>, MetaHttpServerAuthenticator>()
  .RegisterSingle<IHttpServerRequestHandler, HttpServerRequestHandler<IMetaServerState>>();

As you can see, the registration is extremely fine-grained and allows very precise customization as well as easy mocking and testing.
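Because each component is registered individually, a test or an application can swap in its own implementation with a single registration; the NullAuthenticator type below is a hypothetical stand-in.

```csharp
// In a test fixture: replace only the authenticator; all other server
// components stay standard. NullAuthenticator (hypothetical) would
// simply accept every request.
handler.RegisterSingle<IHttpServerAuthenticator<IMetaServerState>, NullAuthenticator>();
```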

[1] Any Men in Black fans out there? Tommy Lee Jones was “old and busted” while Will Smith was “the new hotness”? No? Just me? All righty then…
[2] This diagram brought to you by the diagramming and architecture tools in ReSharper 9.2. Just select the files or assemblies you want to diagram in the Solution Explorer and choose the option to show them in a diagram. You can right-click any type or assembly to show dependent or referenced modules or types. For type diagrams, you can easily control which relationships are shown (e.g. I hide aggregations to avoid clutter) and how the elements are grouped (e.g. I grouped by namespace to include the boxes in my diagram).


Inverse Arrogance

Published by marco on in Quotes

“America is the only country in the world where failure to promote oneself is considered arrogant.”


Iterating with NDepend to remove cyclic dependencies (Part II)

Published by marco on in Programming

In the previous article, we discussed the task of Splitting up assemblies in Quino using NDepend. In this article, I’ll discuss both the high-level and low-level workflows I used with NDepend to efficiently clear up these cycles.

Please note that what follows is a description of how I have used the tool—so far—to get my very specific tasks accomplished. If you’re looking to solve other problems or want to solve the same problems more efficiently, you should take a look at the official NDepend documentation.

What were we doing?

To recap briefly: we are reducing dependencies among top-level namespaces in two large assemblies, in order to be able to split them up into multiple assemblies. The resulting assemblies will have dependencies on each other, but the idea is to make at least some parts of the Encodo/Quino libraries opt-in.
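
The problem can be stated concretely: the namespaces form a directed graph, and any cycle in that graph prevents a clean split into assemblies. As a language-agnostic sketch (the namespace names and edges below are illustrative, not Quino's actual dependency data), a simple depth-first search finds such a cycle:

```python
# Sketch: model namespace dependencies as a directed graph and report one
# cycle that blocks splitting the assembly. Edges here are illustrative.

def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in deps}
    stack = []

    def visit(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, ()):
            if color.get(m) == GRAY:  # back edge: we closed a loop
                return stack[stack.index(m):] + [m]
            if color.get(m) == WHITE:
                cycle = visit(m)
                if cycle:
                    return cycle
        stack.pop()
        color[n] = BLACK
        return None

    for n in deps:
        if color[n] == WHITE:
            cycle = visit(n)
            if cycle:
                return cycle
    return None

deps = {
    "Encodo.Core": {"Encodo.Logging"},       # hypothetical edge
    "Encodo.Logging": {"Encodo.Core"},       # hypothetical edge
    "Encodo.Expressions": {"Encodo.Core"},
}
print(find_cycle(deps))  # → ['Encodo.Core', 'Encodo.Logging', 'Encodo.Core']
```

This is essentially what the tool does at scale, over thousands of edges, which is why a matrix visualization beats printing cycles one at a time.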

The plan of attack

At a high level, I tackled the task in the following loosely defined phases.

Remove direct, root-level dependencies
This is the big first step—to get rid of the little black boxes. I made NDepend show only direct dependencies at first, to reduce clutter. More on specific techniques below.
Remove indirect dependencies
[Figure: Direct and Indirect references (the Black Hole)]

Crank up the magnification to show indirect dependencies as well. This will help you root out the remaining cycles, which can be trickier if you’re not showing enough detail. Conversely, if you turn on indirect dependencies too soon, you’ll be overwhelmed by darkness (see the depressing initial state of the Encodo assembly to the right).
Examine dependencies between root-level namespaces

Even once you’ve gotten rid of all cycles, you may still have unwanted dependencies that hinder splitting namespaces into the desired constellation of assemblies.

For example, the plan is to split all logging and message-recording into an assembly called Encodo.Logging. However, the IRecorder interface (with a single method, Log()) is used practically everywhere. It quickly becomes necessary to split interfaces and implementation—with many more potential dependencies—into two assemblies for some very central interfaces and support classes. In this specific case, I moved IRecorder to Encodo.Core.

Even after you’ve conquered the black hole, you might still have quite a bit of work to do. Never fear, though: NDepend is there to help root out those dependencies as well.
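
To make the IRecorder example concrete, here is a small, hypothetical model of that refactoring (the edge sets are illustrative): before the move, Core and Logging form a two-namespace cycle; moving the interface into Encodo.Core removes the back edge.

```python
# Sketch of the IRecorder move: Core logs through IRecorder, which lived in
# Logging, while Logging uses Core's base types -- a two-namespace cycle.
# Moving the interface into Core removes the back edge. Edge sets are
# illustrative, not Quino's actual dependencies.

def has_cycle(deps):
    """Detect whether the directed graph contains any cycle (DFS)."""
    visiting, done = set(), set()

    def visit(n):
        if n in done:
            return False
        if n in visiting:
            return True
        visiting.add(n)
        if any(visit(m) for m in deps.get(n, ())):
            return True
        visiting.discard(n)
        done.add(n)
        return False

    return any(visit(n) for n in deps)

before = {
    "Encodo.Core": {"Encodo.Logging"},   # Core calls IRecorder.Log()
    "Encodo.Logging": {"Encodo.Core"},   # Logging uses Core's base types
}
after = {
    "Encodo.Core": set(),                # IRecorder now lives in Core
    "Encodo.Logging": {"Encodo.Core"},   # implementation still uses Core
}
print(has_cycle(before), has_cycle(after))  # → True False
```

The general pattern: when a central interface drags its whole implementation namespace into a cycle, move the interface down toward the root.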

Examine cycles in non-root namespaces

Because we can split off smaller assemblies regardless, these dependencies are less important to clean up for our current purposes. However, once this code is packed into its own assembly, its namespaces become root namespaces of their own and—voilà!—you have more potentially nasty dependencies to deal with. Granted, the problem is less severe because you’re dealing with a logically smaller component.

In Quino, non-root namespaces are used more for organization and less for defining components. Still, cycles are cycles and they’re worth examining and at least plucking the low-hanging fruit.

Removing root-level namespace cycles

With the high-level plan described above in hand, I repeated the following steps for the many dependencies I had to untangle. Don’t despair if it looks like your library has a ton of unwanted dependencies. If you’re smart about the ones you untangle first, you can make excellent—and, most importantly, rewarding—progress relatively quickly.[1]

  1. Show the dependency matrix
  2. Choose the same assembly in the row and column
  3. Choose a square that’s black
  4. Click the name of the namespace in the column to show sub-namespaces
  5. Do the same in a row
  6. Keep zooming until you can see where there are dependencies that you don’t want
  7. Refactor/compile/run NDepend analysis to show changes
  8. GOTO 1
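
Conceptually, the “black” squares the loop above hunts for are cells of a boolean adjacency matrix where the row and column namespaces reference each other directly. A minimal sketch of that check, with hypothetical namespaces and edges:

```python
# Sketch: the dependency matrix is a boolean adjacency matrix; a "black"
# square at (i, j) means i and j reference each other directly, i.e. a
# two-element cycle. Namespace names and edges are illustrative.

namespaces = ["Core", "Culture", "Enums", "Expressions"]
uses = {
    "Core": set(),
    "Culture": {"Core", "Enums"},
    "Enums": {"Core", "Culture"},      # mutual with Culture: a black square
    "Expressions": {"Core", "Enums"},
}

def black_squares(namespaces, uses):
    """Return unordered pairs of namespaces that reference each other."""
    return sorted(
        (a, b)
        for i, a in enumerate(namespaces)
        for b in namespaces[i + 1:]
        if b in uses[a] and a in uses[b]
    )

print(black_squares(namespaces, uses))  # → [('Culture', 'Enums')]
```

Each refactoring pass aims to empty this list before moving on to the indirect (transitive) cycles.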

Once again, with pictures!

The high-level plan of attack sounded interesting, but might have left you cold with its abstraction. Then there was the promise of detail with a focus on root-level namespaces, but alas, you might still be left wondering just how, exactly, you reduce these much-hated cycles.

I took some screenshots as I worked on Quino, to document my process and point out parts of NDepend I thought were eminently helpful.

Show only namespaces

[Figure: Show Namespaces involved in a Dependency]
[Figure: Reference cycles in Encodo (Namespaces only)]

I mentioned above that you should “[k]eep zooming in”, but how do you do that? A good first step is to zoom all the way out and show only direct namespace dependencies. This focuses only on using references instead of the much more frequent member accesses. In addition, I changed the default setting to show dependencies in only one direction—when a column references a row (blue), but not vice versa (green).

As you can see, the diagrams are considerably less busy than the one shown above. Here, we can see a few black spots that indicate cycles, but not so many as to be overwhelming.[2] You can hover over the offending squares to show more detail in a popup.

Show members

[Figure: Reference cycles in Encodo (Members)]
[Figure: Bind Matrix to Cluster Cycles Together]

If you don’t see any more cycles between namespaces, switch the detail level to “Members”. Another very useful feature is “Bind Matrix”, which forces the columns and rows to be shown in the same order and concentrates the cycles in a smaller area of the matrix.

As you can see in the diagram, NDepend then highlights the offending area and you can even click the upper-left corner to focus the matrix only on that particular cycle.

Drill down to classes

[Figure: Show Classes involved in a Dependency]
[Figure: Use the handy arrow dependency-indicators]

Once you’re looking at members, it isn’t enough to know just the namespaces involved—you need to know which types are referencing which types. The powerful matrix view lets you drill down through namespaces to show classes as well.

If your classes are large—another no-no, but one thing at a time—then you can drill down to show which method is calling which method to create the cycle. In the screenshot to the right, you can see where I had to do just that in order to finally figure out what was going on.

In that screenshot, you can also see something that I only discovered after using the tool for a while: the direction of usage is indicated with an arrow. You can turn off the tooltips—which are informative, but can be distracting for this task—and you don’t have to remember which color (blue or green) corresponds to which direction of usage.

Indirect dependencies

[Figure: The black hole is almost gone]
[Figure: No more black squares and a 5-element cycle left]

Once you’ve drilled your way down from namespaces only, to member dependencies, to classes and finally to individual members, your diagram should be shaping up quite well.

On the right, you’ll see a diagram of all direct dependencies for the remaining area with a problem. You don’t see any black boxes, which means that all direct dependencies are gone. So we have to turn up the power of our microscope further to show indirect dependencies.

On the left, you can see that the scary, scary black hole from the start of our journey has been whittled down to a small, black spot. And that’s with all direct and indirect dependencies as well as both directions of usage turned on (i.e. the green boxes are back). This picture is much more pleasing, no?
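
What “turning up the microscope” computes is, in effect, the transitive closure of the direct dependencies: a namespace that can reach itself through intermediaries sits in an indirect cycle even when no square is black. A sketch with illustrative edges, echoing the Culture-to-Enums cycle discussed below:

```python
# Sketch: "indirect dependencies" are the transitive closure of the direct
# ones (Warshall's algorithm). A namespace that can reach itself is part of
# a cycle even without any direct mutual reference. Edges are illustrative.

def transitive_closure(deps):
    nodes = list(deps)
    reach = {a: {b: b in deps[a] for b in nodes} for a in nodes}
    for k in nodes:
        for i in nodes:
            if reach[i][k]:
                for j in nodes:
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Culture -> Expressions -> Enums -> Culture: no direct mutual reference,
# but a three-element indirect cycle.
deps = {
    "Culture": {"Expressions"},
    "Expressions": {"Enums"},
    "Enums": {"Culture"},
}
reach = transitive_closure(deps)
in_cycle = sorted(n for n in deps if reach[n][n])
print(in_cycle)  # → ['Culture', 'Enums', 'Expressions']
```

This is why such cycles stay invisible at lower zoom levels: every individual direct edge looks harmless on its own.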

Queries and graphs

[Figure: NDepend Query Language]
[Figure: View graph to see cycle from Culture to Enums (through Expression)]
[Figure: Show indirect (zoom) in on enums & culture]

For the last cluster of indirect dependencies shown above, I had to unpack another feature: NDepend queries. You can select any element and run a query to show using/used-by assemblies or namespaces.[3] The results are shown in a panel, where you can edit the query and see live updates immediately.

Even with a highly zoomed-in view on the cycle, I still couldn’t see the problem, so I took NDepend’s suggestion and generated a graph of the final indirect dependency between Culture and Enums (through Expression). At this zoom level, the graph becomes more useful (for me) and illuminates problems that remain muddy in the matrix (see right).

Crossing the finish line

In order to finish the job efficiently, here are a handful of miscellaneous tips that are useful, but didn’t fit into the guide above.

[Figure: Encodo assembly is finally clean]

  • I set NDepend to automatically re-run an analysis on a successful build. The matrix updates automatically to reflect changes from the last analysis and won’t lose your place.
  • If you have ReSharper, you’ll generally be able to tell whether you’ve fixed the dependencies because the usings will be grayed out in the offending file. You can make several fixes at once before rebuilding and rerunning the analysis.
  • At higher zoom levels (e.g. having drilled down to methods), it is useful to toggle display of row dependencies back on because the dependency issue is only clear when you see the one green box in a sea of blue.
  • Though Matrix Binding is useful for localizing, remember to toggle it off when you want to drill down in the row independently of the namespace selected in the column.

And BOOM! just like that[4], phase 1 (root namespaces) for Encodo was complete! Now, on to Quino.dll…


[Figure: Refactoring 82,087 Symbols…]

Depending on what shape your library is in, do not underestimate the work involved. Even with NDepend riding shotgun and barking out the course like a rally navigator, you still have to actually make the changes. That means lots of refactoring, lots of building, lots of analysis, lots of running tests and lots of reviews of at-times quite-sweeping changes to your code base. The destination is worth the journey, but do not embark on it lightly—and don’t forget to bring the right tools.[5]

[1] This can be a bit distracting: you might get stuck trying to figure out which of all these offenders to fix first.
[2] I’m also happy to report that my initial forays into maintaining a relatively clean library—as opposed to cleaning it—with NDepend have been quite efficient.
[3] And much more: I don’t think I’ve even scratched the surface of the analysis and reporting capabilities offered by this ability to directly query the dependency data.
[4] I’m just kidding. It was a lot of time-consuming work.
[5] In this case, in case it’s not clear: NDepend for analysis and good ol’ ReSharper for refactoring. And ReSharper’s new(ish) architecture view is also quite good, though not even close to detailed enough to replace NDepend: it shows assembly-level dependencies only.